00:00:00.000 Started by upstream project "autotest-per-patch" build number 132695
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.010 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.010 The recommended git tool is: git
00:00:00.011 using credential 00000000-0000-0000-0000-000000000002
00:00:00.013 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.030 Fetching changes from the remote Git repository
00:00:00.034 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.057 Using shallow fetch with depth 1
00:00:00.057 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.057 > git --version # timeout=10
00:00:00.092 > git --version # 'git version 2.39.2'
00:00:00.092 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.129 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.129 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.023 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.035 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.048 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.048 > git config core.sparsecheckout # timeout=10
00:00:06.059 > git read-tree -mu HEAD # timeout=10
00:00:06.076 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.101 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.101 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.188 [Pipeline] Start of Pipeline
00:00:06.201 [Pipeline] library
00:00:06.203 Loading library shm_lib@master
00:00:06.203 Library shm_lib@master is cached. Copying from home.
00:00:06.220 [Pipeline] node
00:01:05.273 Still waiting to schedule task
00:01:05.273 Waiting for next available executor on ‘vagrant-vm-host’
00:14:15.860 Running on VM-host-SM38 in /var/jenkins/workspace/raid-vg-autotest
00:14:15.863 [Pipeline] {
00:14:15.876 [Pipeline] catchError
00:14:15.878 [Pipeline] {
00:14:15.894 [Pipeline] wrap
00:14:15.905 [Pipeline] {
00:14:15.916 [Pipeline] stage
00:14:15.918 [Pipeline] { (Prologue)
00:14:15.947 [Pipeline] echo
00:14:15.949 Node: VM-host-SM38
00:14:15.961 [Pipeline] cleanWs
00:14:15.973 [WS-CLEANUP] Deleting project workspace...
00:14:15.973 [WS-CLEANUP] Deferred wipeout is used...
00:14:15.981 [WS-CLEANUP] done
00:14:16.220 [Pipeline] setCustomBuildProperty
00:14:16.321 [Pipeline] httpRequest
00:14:16.637 [Pipeline] echo
00:14:16.639 Sorcerer 10.211.164.20 is alive
00:14:16.651 [Pipeline] retry
00:14:16.654 [Pipeline] {
00:14:16.670 [Pipeline] httpRequest
00:14:16.675 HttpMethod: GET
00:14:16.676 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:14:16.677 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:14:16.678 Response Code: HTTP/1.1 200 OK
00:14:16.678 Success: Status code 200 is in the accepted range: 200,404
00:14:16.679 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:14:16.840 [Pipeline] }
00:14:16.859 [Pipeline] // retry
00:14:16.866 [Pipeline] sh
00:14:17.151 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:14:17.170 [Pipeline] httpRequest
00:14:17.479 [Pipeline] echo
00:14:17.482 Sorcerer 10.211.164.20 is alive
00:14:17.494 [Pipeline] retry
00:14:17.497 [Pipeline] {
00:14:17.513 [Pipeline] httpRequest
00:14:17.519 HttpMethod: GET
00:14:17.519 URL: http://10.211.164.20/packages/spdk_2cae84b3cd91427b94c20dfd39a930df25256880.tar.gz
00:14:17.520 Sending request to url: http://10.211.164.20/packages/spdk_2cae84b3cd91427b94c20dfd39a930df25256880.tar.gz
00:14:17.521 Response Code: HTTP/1.1 200 OK
00:14:17.522 Success: Status code 200 is in the accepted range: 200,404
00:14:17.522 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_2cae84b3cd91427b94c20dfd39a930df25256880.tar.gz
00:14:20.097 [Pipeline] }
00:14:20.118 [Pipeline] // retry
00:14:20.126 [Pipeline] sh
00:14:20.411 + tar --no-same-owner -xf spdk_2cae84b3cd91427b94c20dfd39a930df25256880.tar.gz
00:14:23.716 [Pipeline] sh
00:14:23.997 + git -C spdk log --oneline -n5
00:14:23.997 2cae84b3c lib/reduce: Support storing metadata on backing dev. (5 of 5, test cases)
00:14:23.997 a0b4fa764 lib/reduce: Support storing metadata on backing dev. (4 of 5, data unmap with async metadata)
00:14:23.997 080d93a73 lib/reduce: Support storing metadata on backing dev. (3 of 5, reload process)
00:14:23.997 62083ef48 lib/reduce: Support storing metadata on backing dev. (2 of 5, data r/w with async metadata)
00:14:23.997 289f56464 lib/reduce: Support storing metadata on backing dev. (1 of 5, struct define and init process)
00:14:24.016 [Pipeline] writeFile
00:14:24.031 [Pipeline] sh
00:14:24.316 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:14:24.330 [Pipeline] sh
00:14:24.614 + cat autorun-spdk.conf
00:14:24.614 SPDK_RUN_FUNCTIONAL_TEST=1
00:14:24.614 SPDK_RUN_ASAN=1
00:14:24.614 SPDK_RUN_UBSAN=1
00:14:24.614 SPDK_TEST_RAID=1
00:14:24.614 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:14:24.622 RUN_NIGHTLY=0
00:14:24.624 [Pipeline] }
00:14:24.639 [Pipeline] // stage
00:14:24.655 [Pipeline] stage
00:14:24.658 [Pipeline] { (Run VM)
00:14:24.671 [Pipeline] sh
00:14:24.955 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:14:24.955 + echo 'Start stage prepare_nvme.sh'
00:14:24.956 Start stage prepare_nvme.sh
00:14:24.956 + [[ -n 2 ]]
00:14:24.956 + disk_prefix=ex2
00:14:24.956 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:14:24.956 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:14:24.956 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:14:24.956 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:14:24.956 ++ SPDK_RUN_ASAN=1
00:14:24.956 ++ SPDK_RUN_UBSAN=1
00:14:24.956 ++ SPDK_TEST_RAID=1
00:14:24.956 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:14:24.956 ++ RUN_NIGHTLY=0
00:14:24.956 + cd /var/jenkins/workspace/raid-vg-autotest
00:14:24.956 + nvme_files=()
00:14:24.956 + declare -A nvme_files
00:14:24.956 + backend_dir=/var/lib/libvirt/images/backends
00:14:24.956 + nvme_files['nvme.img']=5G
00:14:24.956 + nvme_files['nvme-cmb.img']=5G
00:14:24.956 + nvme_files['nvme-multi0.img']=4G
00:14:24.956 + nvme_files['nvme-multi1.img']=4G
00:14:24.956 + nvme_files['nvme-multi2.img']=4G
00:14:24.956 + nvme_files['nvme-openstack.img']=8G
00:14:24.956 + nvme_files['nvme-zns.img']=5G
00:14:24.956 + (( SPDK_TEST_NVME_PMR == 1 ))
00:14:24.956 + (( SPDK_TEST_FTL == 1 ))
00:14:24.956 + (( SPDK_TEST_NVME_FDP == 1 ))
00:14:24.956 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:14:24.956 + for nvme in "${!nvme_files[@]}"
00:14:24.956 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:14:24.956 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:14:24.956 + for nvme in "${!nvme_files[@]}"
00:14:24.956 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:14:24.956 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:14:24.956 + for nvme in "${!nvme_files[@]}"
00:14:24.956 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:14:24.956 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:14:24.956 + for nvme in "${!nvme_files[@]}"
00:14:24.956 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:14:24.956 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:14:24.956 + for nvme in "${!nvme_files[@]}"
00:14:24.956 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:14:24.956 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:14:24.956 + for nvme in "${!nvme_files[@]}"
00:14:24.956 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:14:25.217 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:14:25.217 + for nvme in "${!nvme_files[@]}"
00:14:25.217 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:14:25.217 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:14:25.217 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:14:25.217 + echo 'End stage prepare_nvme.sh'
00:14:25.217 End stage prepare_nvme.sh
00:14:25.231 [Pipeline] sh
00:14:25.518 + DISTRO=fedora39
00:14:25.518 + CPUS=10
00:14:25.518 + RAM=12288
00:14:25.518 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:14:25.518 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39
00:14:25.518
00:14:25.518 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:14:25.518 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:14:25.518 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:14:25.518 HELP=0
00:14:25.518 DRY_RUN=0
00:14:25.518 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,
00:14:25.518 NVME_DISKS_TYPE=nvme,nvme,
00:14:25.518 NVME_AUTO_CREATE=0
00:14:25.518 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,
00:14:25.518 NVME_CMB=,,
00:14:25.518 NVME_PMR=,,
00:14:25.518 NVME_ZNS=,,
00:14:25.518 NVME_MS=,,
00:14:25.518 NVME_FDP=,,
00:14:25.518 SPDK_VAGRANT_DISTRO=fedora39
00:14:25.518 SPDK_VAGRANT_VMCPU=10
00:14:25.518 SPDK_VAGRANT_VMRAM=12288
00:14:25.518 SPDK_VAGRANT_PROVIDER=libvirt
00:14:25.518 SPDK_VAGRANT_HTTP_PROXY=
00:14:25.518 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:14:25.518 SPDK_OPENSTACK_NETWORK=0
00:14:25.518 VAGRANT_PACKAGE_BOX=0
00:14:25.518 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:14:25.518 FORCE_DISTRO=true
00:14:25.518 VAGRANT_BOX_VERSION=
00:14:25.518 EXTRA_VAGRANTFILES=
00:14:25.518 NIC_MODEL=e1000
00:14:25.518
00:14:25.518 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:14:25.518 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:14:28.128 Bringing machine 'default' up with 'libvirt' provider...
00:14:28.391 ==> default: Creating image (snapshot of base box volume).
00:14:28.651 ==> default: Creating domain with the following settings...
00:14:28.651 ==> default:  -- Name: fedora39-39-1.5-1721788873-2326_default_1733402770_2c8a91534c07a9dce3c6
00:14:28.651 ==> default:  -- Domain type: kvm
00:14:28.651 ==> default:  -- Cpus: 10
00:14:28.651 ==> default:  -- Feature: acpi
00:14:28.651 ==> default:  -- Feature: apic
00:14:28.651 ==> default:  -- Feature: pae
00:14:28.651 ==> default:  -- Memory: 12288M
00:14:28.651 ==> default:  -- Memory Backing: hugepages:
00:14:28.651 ==> default:  -- Management MAC:
00:14:28.651 ==> default:  -- Loader:
00:14:28.651 ==> default:  -- Nvram:
00:14:28.651 ==> default:  -- Base box: spdk/fedora39
00:14:28.651 ==> default:  -- Storage pool: default
00:14:28.651 ==> default:  -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733402770_2c8a91534c07a9dce3c6.img (20G)
00:14:28.651 ==> default:  -- Volume Cache: default
00:14:28.651 ==> default:  -- Kernel:
00:14:28.651 ==> default:  -- Initrd:
00:14:28.651 ==> default:  -- Graphics Type: vnc
00:14:28.651 ==> default:  -- Graphics Port: -1
00:14:28.651 ==> default:  -- Graphics IP: 127.0.0.1
00:14:28.651 ==> default:  -- Graphics Password: Not defined
00:14:28.651 ==> default:  -- Video Type: cirrus
00:14:28.651 ==> default:  -- Video VRAM: 9216
00:14:28.651 ==> default:  -- Sound Type:
00:14:28.651 ==> default:  -- Keymap: en-us
00:14:28.651 ==> default:  -- TPM Path:
00:14:28.651 ==> default:  -- INPUT: type=mouse, bus=ps2
00:14:28.651 ==> default:  -- Command line args:
00:14:28.651 ==> default:  -> value=-device,
00:14:28.651 ==> default:  -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:14:28.651 ==> default:  -> value=-drive,
00:14:28.651 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:14:28.651 ==> default:  -> value=-device,
00:14:28.651 ==> default:  -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:28.651 ==> default:  -> value=-device,
00:14:28.651 ==> default:  -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:14:28.651 ==> default:  -> value=-drive,
00:14:28.651 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:14:28.651 ==> default:  -> value=-device,
00:14:28.651 ==> default:  -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:28.651 ==> default:  -> value=-drive,
00:14:28.651 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:14:28.651 ==> default:  -> value=-device,
00:14:28.651 ==> default:  -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:28.651 ==> default:  -> value=-drive,
00:14:28.651 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:14:28.651 ==> default:  -> value=-device,
00:14:28.651 ==> default:  -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:28.651 ==> default: Creating shared folders metadata...
00:14:28.912 ==> default: Starting domain.
00:14:29.858 ==> default: Waiting for domain to get an IP address...
00:14:44.763 ==> default: Waiting for SSH to become available...
00:14:44.763 ==> default: Configuring and enabling network interfaces...
00:14:47.311     default: SSH address: 192.168.121.174:22
00:14:47.311     default: SSH username: vagrant
00:14:47.311     default: SSH auth method: private key
00:14:49.246 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:14:55.836 ==> default: Mounting SSHFS shared folder...
00:14:56.774 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:14:56.774 ==> default: Checking Mount..
00:14:58.157 ==> default: Folder Successfully Mounted!
00:14:58.157
00:14:58.157 SUCCESS!
00:14:58.157
00:14:58.157   cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:14:58.157   Use vagrant "suspend" and vagrant "resume" to stop and start.
00:14:58.157   Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:14:58.157
00:14:58.168 [Pipeline] }
00:14:58.186 [Pipeline] // stage
00:14:58.195 [Pipeline] dir
00:14:58.196 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:14:58.198 [Pipeline] {
00:14:58.212 [Pipeline] catchError
00:14:58.215 [Pipeline] {
00:14:58.228 [Pipeline] sh
00:14:58.513 + vagrant ssh-config --host vagrant
00:14:58.513 + sed -ne '/^Host/,$p'
00:14:58.513 + tee ssh_conf
00:15:01.060 Host vagrant
00:15:01.060   HostName 192.168.121.174
00:15:01.060   User vagrant
00:15:01.060   Port 22
00:15:01.060   UserKnownHostsFile /dev/null
00:15:01.060   StrictHostKeyChecking no
00:15:01.060   PasswordAuthentication no
00:15:01.060   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:15:01.060   IdentitiesOnly yes
00:15:01.060   LogLevel FATAL
00:15:01.060   ForwardAgent yes
00:15:01.060   ForwardX11 yes
00:15:01.060
00:15:01.076 [Pipeline] withEnv
00:15:01.078 [Pipeline] {
00:15:01.092 [Pipeline] sh
00:15:01.373 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:15:01.373 source /etc/os-release
00:15:01.373 [[ -e /image.version ]] && img=$(< /image.version)
00:15:01.373 # Minimal, systemd-like check.
00:15:01.373 if [[ -e /.dockerenv ]]; then
00:15:01.373 # Clear garbage from the node'\''s name:
00:15:01.373 #  agt-er_autotest_547-896 -> autotest_547-896
00:15:01.373 #  $HOSTNAME is the actual container id
00:15:01.373 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:15:01.373 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:15:01.373 # We can assume this is a mount from a host where container is running,
00:15:01.373 # so fetch its hostname to easily identify the target swarm worker.
00:15:01.373 container="$(< /etc/hostname) ($agent)"
00:15:01.373 else
00:15:01.373 # Fallback
00:15:01.373 container=$agent
00:15:01.373 fi
00:15:01.373 fi
00:15:01.373 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:15:01.373 '
00:15:01.387 [Pipeline] }
00:15:01.405 [Pipeline] // withEnv
00:15:01.414 [Pipeline] setCustomBuildProperty
00:15:01.429 [Pipeline] stage
00:15:01.431 [Pipeline] { (Tests)
00:15:01.448 [Pipeline] sh
00:15:01.733 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:15:01.749 [Pipeline] sh
00:15:02.034 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:15:02.050 [Pipeline] timeout
00:15:02.051 Timeout set to expire in 1 hr 30 min
00:15:02.053 [Pipeline] {
00:15:02.069 [Pipeline] sh
00:15:02.396 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:15:02.656 HEAD is now at 2cae84b3c lib/reduce: Support storing metadata on backing dev. (5 of 5, test cases)
00:15:02.671 [Pipeline] sh
00:15:02.972 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:15:02.987 [Pipeline] sh
00:15:03.269 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:15:03.286 [Pipeline] sh
00:15:03.570 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo'
00:15:03.570 ++ readlink -f spdk_repo
00:15:03.570 + DIR_ROOT=/home/vagrant/spdk_repo
00:15:03.570 + [[ -n /home/vagrant/spdk_repo ]]
00:15:03.570 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:15:03.570 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:15:03.570 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:15:03.570 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:15:03.570 + [[ -d /home/vagrant/spdk_repo/output ]]
00:15:03.570 + [[ raid-vg-autotest == pkgdep-* ]]
00:15:03.570 + cd /home/vagrant/spdk_repo
00:15:03.571 + source /etc/os-release
00:15:03.571 ++ NAME='Fedora Linux'
00:15:03.571 ++ VERSION='39 (Cloud Edition)'
00:15:03.571 ++ ID=fedora
00:15:03.571 ++ VERSION_ID=39
00:15:03.571 ++ VERSION_CODENAME=
00:15:03.571 ++ PLATFORM_ID=platform:f39
00:15:03.571 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:15:03.571 ++ ANSI_COLOR='0;38;2;60;110;180'
00:15:03.571 ++ LOGO=fedora-logo-icon
00:15:03.571 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:15:03.571 ++ HOME_URL=https://fedoraproject.org/
00:15:03.571 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:15:03.571 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:15:03.571 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:15:03.571 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:15:03.571 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:15:03.571 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:15:03.571 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:15:03.571 ++ SUPPORT_END=2024-11-12
00:15:03.571 ++ VARIANT='Cloud Edition'
00:15:03.571 ++ VARIANT_ID=cloud
00:15:03.571 + uname -a
00:15:03.571 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:15:03.571 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:15:04.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:15:04.143 Hugepages
00:15:04.143 node     hugesize     free /  total
00:15:04.143 node0   1048576kB        0 /      0
00:15:04.143 node0      2048kB        0 /      0
00:15:04.143
00:15:04.143 Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
00:15:04.143 virtio   0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:15:04.143 NVMe     0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:15:04.143 NVMe     0000:00:11.0    1b36   0010   unknown nvme             nvme1      nvme1n1 nvme1n2 nvme1n3
00:15:04.143 + rm -f /tmp/spdk-ld-path
00:15:04.143 + source autorun-spdk.conf
00:15:04.143 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:15:04.143 ++ SPDK_RUN_ASAN=1
00:15:04.143 ++ SPDK_RUN_UBSAN=1
00:15:04.143 ++ SPDK_TEST_RAID=1
00:15:04.143 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:15:04.143 ++ RUN_NIGHTLY=0
00:15:04.143 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:15:04.143 + [[ -n '' ]]
00:15:04.143 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:15:04.143 + for M in /var/spdk/build-*-manifest.txt
00:15:04.143 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:15:04.143 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:15:04.143 + for M in /var/spdk/build-*-manifest.txt
00:15:04.143 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:15:04.143 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:15:04.143 + for M in /var/spdk/build-*-manifest.txt
00:15:04.143 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:15:04.143 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:15:04.143 ++ uname
00:15:04.143 + [[ Linux == \L\i\n\u\x ]]
00:15:04.143 + sudo dmesg -T
00:15:04.143 + sudo dmesg --clear
00:15:04.143 + dmesg_pid=4979
00:15:04.143 + sudo dmesg -Tw
00:15:04.143 + [[ Fedora Linux == FreeBSD ]]
00:15:04.143 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:15:04.143 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:15:04.143 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:15:04.143 + [[ -x /usr/src/fio-static/fio ]]
00:15:04.143 + export FIO_BIN=/usr/src/fio-static/fio
00:15:04.143 + FIO_BIN=/usr/src/fio-static/fio
00:15:04.143 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:15:04.143 + [[ ! -v VFIO_QEMU_BIN ]]
00:15:04.143 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:15:04.143 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:15:04.143 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:15:04.143 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:15:04.143 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:15:04.143 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:15:04.143 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:15:04.143   12:46:46 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:15:04.143   12:46:46 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:15:04.143   12:46:46 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:15:04.143   12:46:46 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:15:04.143   12:46:46 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:15:04.143   12:46:46 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:15:04.143   12:46:46 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:15:04.143   12:46:46 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:15:04.143   12:46:46 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:15:04.143   12:46:46 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:15:04.404   12:46:46 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:15:04.404   12:46:46 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:15:04.404   12:46:46 -- scripts/common.sh@15 -- $ shopt -s extglob
00:15:04.404   12:46:46 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:15:04.404   12:46:46 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:04.404   12:46:46 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:04.404   12:46:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:04.404   12:46:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:04.404   12:46:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:04.404   12:46:46 -- paths/export.sh@5 -- $ export PATH
00:15:04.404   12:46:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:04.404   12:46:46 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:15:04.404   12:46:46 -- common/autobuild_common.sh@493 -- $ date +%s
00:15:04.404   12:46:46 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733402806.XXXXXX
00:15:04.404   12:46:46 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733402806.0gMAeA
00:15:04.404   12:46:46 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:15:04.404   12:46:46 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:15:04.404   12:46:46 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:15:04.404   12:46:46 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:15:04.404   12:46:46 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:15:04.404   12:46:46 -- common/autobuild_common.sh@509 -- $ get_config_params
00:15:04.404   12:46:46 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:15:04.404   12:46:46 -- common/autotest_common.sh@10 -- $ set +x
00:15:04.404   12:46:46 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:15:04.404   12:46:46 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:15:04.404   12:46:46 -- pm/common@17 -- $ local monitor
00:15:04.404   12:46:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:15:04.404   12:46:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:15:04.404   12:46:46 -- pm/common@25 -- $ sleep 1
00:15:04.404   12:46:46 -- pm/common@21 -- $ date +%s
00:15:04.404   12:46:46 -- pm/common@21 -- $ date +%s
00:15:04.404   12:46:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733402806
00:15:04.404   12:46:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733402806
00:15:04.404 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733402806_collect-cpu-load.pm.log
00:15:04.404 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733402806_collect-vmstat.pm.log
00:15:05.342   12:46:47 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:15:05.342   12:46:47 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:15:05.342   12:46:47 -- spdk/autobuild.sh@12 -- $ umask 022
00:15:05.342   12:46:47 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:15:05.342   12:46:47 -- spdk/autobuild.sh@16 -- $ date -u
00:15:05.342 Thu Dec 5 12:46:47 PM UTC 2024
00:15:05.342   12:46:47 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:15:05.342 v25.01-pre-301-g2cae84b3c
00:15:05.342   12:46:47 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:15:05.342   12:46:47 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:15:05.342   12:46:47 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:15:05.342   12:46:47 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:15:05.342   12:46:47 -- common/autotest_common.sh@10 -- $ set +x
00:15:05.342 ************************************
00:15:05.342 START TEST asan
00:15:05.342 ************************************
00:15:05.342 using asan
00:15:05.342   12:46:47 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:15:05.342
00:15:05.342 real	0m0.000s
00:15:05.342 user	0m0.000s
00:15:05.342 sys	0m0.000s
00:15:05.342   12:46:47 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:15:05.342   12:46:47 asan -- common/autotest_common.sh@10 -- $ set +x
00:15:05.342 ************************************
00:15:05.342 END TEST asan
00:15:05.342 ************************************
00:15:05.342   12:46:47 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:15:05.342   12:46:47 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:15:05.342   12:46:47 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:15:05.342   12:46:47 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:15:05.342   12:46:47 -- common/autotest_common.sh@10 -- $ set +x
00:15:05.342 ************************************
00:15:05.342 START TEST ubsan
00:15:05.342 ************************************
00:15:05.342 using ubsan
00:15:05.342   12:46:47 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:15:05.342
00:15:05.342 real	0m0.000s
00:15:05.342 user	0m0.000s
00:15:05.342 sys	0m0.000s
00:15:05.342   12:46:47 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:15:05.342   12:46:47 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:15:05.342 ************************************
00:15:05.342 END TEST ubsan
00:15:05.342 ************************************
00:15:05.342   12:46:47 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:15:05.342   12:46:47 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:15:05.342   12:46:47 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:15:05.342   12:46:47 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:15:05.342   12:46:47 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:15:05.342   12:46:47 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:15:05.342   12:46:47 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:15:05.342   12:46:47 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:15:05.342   12:46:47 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:15:05.601 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:15:05.601 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:15:05.859 Using 'verbs' RDMA provider
00:15:16.407 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:15:26.383 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:15:26.644 Creating mk/config.mk...done.
00:15:26.644 Creating mk/cc.flags.mk...done.
00:15:26.644 Type 'make' to build.
00:15:26.644   12:47:09 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:15:26.644   12:47:09 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:15:26.644   12:47:09 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:15:26.644   12:47:09 -- common/autotest_common.sh@10 -- $ set +x
00:15:26.644 ************************************
00:15:26.644 START TEST make
00:15:26.644 ************************************
00:15:26.644   12:47:09 make -- common/autotest_common.sh@1129 -- $ make -j10
00:15:26.905 make[1]: Nothing to be done for 'all'.
00:15:36.893 The Meson build system 00:15:36.894 Version: 1.5.0 00:15:36.894 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:15:36.894 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:15:36.894 Build type: native build 00:15:36.894 Program cat found: YES (/usr/bin/cat) 00:15:36.894 Project name: DPDK 00:15:36.894 Project version: 24.03.0 00:15:36.894 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:15:36.894 C linker for the host machine: cc ld.bfd 2.40-14 00:15:36.894 Host machine cpu family: x86_64 00:15:36.894 Host machine cpu: x86_64 00:15:36.894 Message: ## Building in Developer Mode ## 00:15:36.894 Program pkg-config found: YES (/usr/bin/pkg-config) 00:15:36.894 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:15:36.894 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:15:36.894 Program python3 found: YES (/usr/bin/python3) 00:15:36.894 Program cat found: YES (/usr/bin/cat) 00:15:36.894 Compiler for C supports arguments -march=native: YES 00:15:36.894 Checking for size of "void *" : 8 00:15:36.894 Checking for size of "void *" : 8 (cached) 00:15:36.894 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:15:36.894 Library m found: YES 00:15:36.894 Library numa found: YES 00:15:36.894 Has header "numaif.h" : YES 00:15:36.894 Library fdt found: NO 00:15:36.894 Library execinfo found: NO 00:15:36.894 Has header "execinfo.h" : YES 00:15:36.894 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:15:36.894 Run-time dependency libarchive found: NO (tried pkgconfig) 00:15:36.894 Run-time dependency libbsd found: NO (tried pkgconfig) 00:15:36.894 Run-time dependency jansson found: NO (tried pkgconfig) 00:15:36.894 Run-time dependency openssl found: YES 3.1.1 00:15:36.894 Run-time dependency libpcap found: YES 1.10.4 00:15:36.894 Has header "pcap.h" with dependency 
libpcap: YES 00:15:36.894 Compiler for C supports arguments -Wcast-qual: YES 00:15:36.894 Compiler for C supports arguments -Wdeprecated: YES 00:15:36.894 Compiler for C supports arguments -Wformat: YES 00:15:36.894 Compiler for C supports arguments -Wformat-nonliteral: NO 00:15:36.894 Compiler for C supports arguments -Wformat-security: NO 00:15:36.894 Compiler for C supports arguments -Wmissing-declarations: YES 00:15:36.894 Compiler for C supports arguments -Wmissing-prototypes: YES 00:15:36.894 Compiler for C supports arguments -Wnested-externs: YES 00:15:36.894 Compiler for C supports arguments -Wold-style-definition: YES 00:15:36.894 Compiler for C supports arguments -Wpointer-arith: YES 00:15:36.894 Compiler for C supports arguments -Wsign-compare: YES 00:15:36.894 Compiler for C supports arguments -Wstrict-prototypes: YES 00:15:36.894 Compiler for C supports arguments -Wundef: YES 00:15:36.894 Compiler for C supports arguments -Wwrite-strings: YES 00:15:36.894 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:15:36.894 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:15:36.894 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:15:36.894 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:15:36.894 Program objdump found: YES (/usr/bin/objdump) 00:15:36.894 Compiler for C supports arguments -mavx512f: YES 00:15:36.894 Checking if "AVX512 checking" compiles: YES 00:15:36.894 Fetching value of define "__SSE4_2__" : 1 00:15:36.894 Fetching value of define "__AES__" : 1 00:15:36.894 Fetching value of define "__AVX__" : 1 00:15:36.894 Fetching value of define "__AVX2__" : 1 00:15:36.894 Fetching value of define "__AVX512BW__" : 1 00:15:36.894 Fetching value of define "__AVX512CD__" : 1 00:15:36.894 Fetching value of define "__AVX512DQ__" : 1 00:15:36.894 Fetching value of define "__AVX512F__" : 1 00:15:36.894 Fetching value of define "__AVX512VL__" : 1 00:15:36.894 Fetching value of define 
"__PCLMUL__" : 1 00:15:36.894 Fetching value of define "__RDRND__" : 1 00:15:36.894 Fetching value of define "__RDSEED__" : 1 00:15:36.894 Fetching value of define "__VPCLMULQDQ__" : 1 00:15:36.894 Fetching value of define "__znver1__" : (undefined) 00:15:36.894 Fetching value of define "__znver2__" : (undefined) 00:15:36.894 Fetching value of define "__znver3__" : (undefined) 00:15:36.894 Fetching value of define "__znver4__" : (undefined) 00:15:36.894 Library asan found: YES 00:15:36.894 Compiler for C supports arguments -Wno-format-truncation: YES 00:15:36.894 Message: lib/log: Defining dependency "log" 00:15:36.894 Message: lib/kvargs: Defining dependency "kvargs" 00:15:36.894 Message: lib/telemetry: Defining dependency "telemetry" 00:15:36.894 Library rt found: YES 00:15:36.894 Checking for function "getentropy" : NO 00:15:36.894 Message: lib/eal: Defining dependency "eal" 00:15:36.894 Message: lib/ring: Defining dependency "ring" 00:15:36.894 Message: lib/rcu: Defining dependency "rcu" 00:15:36.894 Message: lib/mempool: Defining dependency "mempool" 00:15:36.894 Message: lib/mbuf: Defining dependency "mbuf" 00:15:36.894 Fetching value of define "__PCLMUL__" : 1 (cached) 00:15:36.894 Fetching value of define "__AVX512F__" : 1 (cached) 00:15:36.894 Fetching value of define "__AVX512BW__" : 1 (cached) 00:15:36.894 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:15:36.894 Fetching value of define "__AVX512VL__" : 1 (cached) 00:15:36.894 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:15:36.894 Compiler for C supports arguments -mpclmul: YES 00:15:36.894 Compiler for C supports arguments -maes: YES 00:15:36.894 Compiler for C supports arguments -mavx512f: YES (cached) 00:15:36.894 Compiler for C supports arguments -mavx512bw: YES 00:15:36.894 Compiler for C supports arguments -mavx512dq: YES 00:15:36.894 Compiler for C supports arguments -mavx512vl: YES 00:15:36.894 Compiler for C supports arguments -mvpclmulqdq: YES 00:15:36.894 Compiler for C 
supports arguments -mavx2: YES 00:15:36.894 Compiler for C supports arguments -mavx: YES 00:15:36.894 Message: lib/net: Defining dependency "net" 00:15:36.894 Message: lib/meter: Defining dependency "meter" 00:15:36.894 Message: lib/ethdev: Defining dependency "ethdev" 00:15:36.894 Message: lib/pci: Defining dependency "pci" 00:15:36.894 Message: lib/cmdline: Defining dependency "cmdline" 00:15:36.894 Message: lib/hash: Defining dependency "hash" 00:15:36.894 Message: lib/timer: Defining dependency "timer" 00:15:36.894 Message: lib/compressdev: Defining dependency "compressdev" 00:15:36.894 Message: lib/cryptodev: Defining dependency "cryptodev" 00:15:36.894 Message: lib/dmadev: Defining dependency "dmadev" 00:15:36.894 Compiler for C supports arguments -Wno-cast-qual: YES 00:15:36.894 Message: lib/power: Defining dependency "power" 00:15:36.894 Message: lib/reorder: Defining dependency "reorder" 00:15:36.894 Message: lib/security: Defining dependency "security" 00:15:36.894 Has header "linux/userfaultfd.h" : YES 00:15:36.894 Has header "linux/vduse.h" : YES 00:15:36.894 Message: lib/vhost: Defining dependency "vhost" 00:15:36.894 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:15:36.894 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:15:36.894 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:15:36.894 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:15:36.894 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:15:36.894 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:15:36.894 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:15:36.894 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:15:36.894 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:15:36.894 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:15:36.894 Program doxygen found: YES 
(/usr/local/bin/doxygen) 00:15:36.894 Configuring doxy-api-html.conf using configuration 00:15:36.894 Configuring doxy-api-man.conf using configuration 00:15:36.894 Program mandb found: YES (/usr/bin/mandb) 00:15:36.894 Program sphinx-build found: NO 00:15:36.894 Configuring rte_build_config.h using configuration 00:15:36.894 Message: 00:15:36.894 ================= 00:15:36.894 Applications Enabled 00:15:36.894 ================= 00:15:36.894 00:15:36.894 apps: 00:15:36.894 00:15:36.894 00:15:36.894 Message: 00:15:36.894 ================= 00:15:36.894 Libraries Enabled 00:15:36.894 ================= 00:15:36.894 00:15:36.894 libs: 00:15:36.894 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:15:36.894 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:15:36.894 cryptodev, dmadev, power, reorder, security, vhost, 00:15:36.894 00:15:36.894 Message: 00:15:36.894 =============== 00:15:36.894 Drivers Enabled 00:15:36.894 =============== 00:15:36.894 00:15:36.894 common: 00:15:36.894 00:15:36.894 bus: 00:15:36.894 pci, vdev, 00:15:36.894 mempool: 00:15:36.894 ring, 00:15:36.894 dma: 00:15:36.894 00:15:36.894 net: 00:15:36.894 00:15:36.894 crypto: 00:15:36.894 00:15:36.894 compress: 00:15:36.894 00:15:36.894 vdpa: 00:15:36.894 00:15:36.894 00:15:36.894 Message: 00:15:36.894 ================= 00:15:36.894 Content Skipped 00:15:36.894 ================= 00:15:36.894 00:15:36.894 apps: 00:15:36.894 dumpcap: explicitly disabled via build config 00:15:36.894 graph: explicitly disabled via build config 00:15:36.894 pdump: explicitly disabled via build config 00:15:36.894 proc-info: explicitly disabled via build config 00:15:36.894 test-acl: explicitly disabled via build config 00:15:36.894 test-bbdev: explicitly disabled via build config 00:15:36.894 test-cmdline: explicitly disabled via build config 00:15:36.894 test-compress-perf: explicitly disabled via build config 00:15:36.894 test-crypto-perf: explicitly disabled via build config 00:15:36.894 
test-dma-perf: explicitly disabled via build config 00:15:36.894 test-eventdev: explicitly disabled via build config 00:15:36.894 test-fib: explicitly disabled via build config 00:15:36.894 test-flow-perf: explicitly disabled via build config 00:15:36.894 test-gpudev: explicitly disabled via build config 00:15:36.894 test-mldev: explicitly disabled via build config 00:15:36.894 test-pipeline: explicitly disabled via build config 00:15:36.894 test-pmd: explicitly disabled via build config 00:15:36.894 test-regex: explicitly disabled via build config 00:15:36.894 test-sad: explicitly disabled via build config 00:15:36.894 test-security-perf: explicitly disabled via build config 00:15:36.894 00:15:36.894 libs: 00:15:36.895 argparse: explicitly disabled via build config 00:15:36.895 metrics: explicitly disabled via build config 00:15:36.895 acl: explicitly disabled via build config 00:15:36.895 bbdev: explicitly disabled via build config 00:15:36.895 bitratestats: explicitly disabled via build config 00:15:36.895 bpf: explicitly disabled via build config 00:15:36.895 cfgfile: explicitly disabled via build config 00:15:36.895 distributor: explicitly disabled via build config 00:15:36.895 efd: explicitly disabled via build config 00:15:36.895 eventdev: explicitly disabled via build config 00:15:36.895 dispatcher: explicitly disabled via build config 00:15:36.895 gpudev: explicitly disabled via build config 00:15:36.895 gro: explicitly disabled via build config 00:15:36.895 gso: explicitly disabled via build config 00:15:36.895 ip_frag: explicitly disabled via build config 00:15:36.895 jobstats: explicitly disabled via build config 00:15:36.895 latencystats: explicitly disabled via build config 00:15:36.895 lpm: explicitly disabled via build config 00:15:36.895 member: explicitly disabled via build config 00:15:36.895 pcapng: explicitly disabled via build config 00:15:36.895 rawdev: explicitly disabled via build config 00:15:36.895 regexdev: explicitly disabled via build 
config 00:15:36.895 mldev: explicitly disabled via build config 00:15:36.895 rib: explicitly disabled via build config 00:15:36.895 sched: explicitly disabled via build config 00:15:36.895 stack: explicitly disabled via build config 00:15:36.895 ipsec: explicitly disabled via build config 00:15:36.895 pdcp: explicitly disabled via build config 00:15:36.895 fib: explicitly disabled via build config 00:15:36.895 port: explicitly disabled via build config 00:15:36.895 pdump: explicitly disabled via build config 00:15:36.895 table: explicitly disabled via build config 00:15:36.895 pipeline: explicitly disabled via build config 00:15:36.895 graph: explicitly disabled via build config 00:15:36.895 node: explicitly disabled via build config 00:15:36.895 00:15:36.895 drivers: 00:15:36.895 common/cpt: not in enabled drivers build config 00:15:36.895 common/dpaax: not in enabled drivers build config 00:15:36.895 common/iavf: not in enabled drivers build config 00:15:36.895 common/idpf: not in enabled drivers build config 00:15:36.895 common/ionic: not in enabled drivers build config 00:15:36.895 common/mvep: not in enabled drivers build config 00:15:36.895 common/octeontx: not in enabled drivers build config 00:15:36.895 bus/auxiliary: not in enabled drivers build config 00:15:36.895 bus/cdx: not in enabled drivers build config 00:15:36.895 bus/dpaa: not in enabled drivers build config 00:15:36.895 bus/fslmc: not in enabled drivers build config 00:15:36.895 bus/ifpga: not in enabled drivers build config 00:15:36.895 bus/platform: not in enabled drivers build config 00:15:36.895 bus/uacce: not in enabled drivers build config 00:15:36.895 bus/vmbus: not in enabled drivers build config 00:15:36.895 common/cnxk: not in enabled drivers build config 00:15:36.895 common/mlx5: not in enabled drivers build config 00:15:36.895 common/nfp: not in enabled drivers build config 00:15:36.895 common/nitrox: not in enabled drivers build config 00:15:36.895 common/qat: not in enabled drivers 
build config 00:15:36.895 common/sfc_efx: not in enabled drivers build config 00:15:36.895 mempool/bucket: not in enabled drivers build config 00:15:36.895 mempool/cnxk: not in enabled drivers build config 00:15:36.895 mempool/dpaa: not in enabled drivers build config 00:15:36.895 mempool/dpaa2: not in enabled drivers build config 00:15:36.895 mempool/octeontx: not in enabled drivers build config 00:15:36.895 mempool/stack: not in enabled drivers build config 00:15:36.895 dma/cnxk: not in enabled drivers build config 00:15:36.895 dma/dpaa: not in enabled drivers build config 00:15:36.895 dma/dpaa2: not in enabled drivers build config 00:15:36.895 dma/hisilicon: not in enabled drivers build config 00:15:36.895 dma/idxd: not in enabled drivers build config 00:15:36.895 dma/ioat: not in enabled drivers build config 00:15:36.895 dma/skeleton: not in enabled drivers build config 00:15:36.895 net/af_packet: not in enabled drivers build config 00:15:36.895 net/af_xdp: not in enabled drivers build config 00:15:36.895 net/ark: not in enabled drivers build config 00:15:36.895 net/atlantic: not in enabled drivers build config 00:15:36.895 net/avp: not in enabled drivers build config 00:15:36.895 net/axgbe: not in enabled drivers build config 00:15:36.895 net/bnx2x: not in enabled drivers build config 00:15:36.895 net/bnxt: not in enabled drivers build config 00:15:36.895 net/bonding: not in enabled drivers build config 00:15:36.895 net/cnxk: not in enabled drivers build config 00:15:36.895 net/cpfl: not in enabled drivers build config 00:15:36.895 net/cxgbe: not in enabled drivers build config 00:15:36.895 net/dpaa: not in enabled drivers build config 00:15:36.895 net/dpaa2: not in enabled drivers build config 00:15:36.895 net/e1000: not in enabled drivers build config 00:15:36.895 net/ena: not in enabled drivers build config 00:15:36.895 net/enetc: not in enabled drivers build config 00:15:36.895 net/enetfec: not in enabled drivers build config 00:15:36.895 net/enic: not in 
enabled drivers build config 00:15:36.895 net/failsafe: not in enabled drivers build config 00:15:36.895 net/fm10k: not in enabled drivers build config 00:15:36.895 net/gve: not in enabled drivers build config 00:15:36.895 net/hinic: not in enabled drivers build config 00:15:36.895 net/hns3: not in enabled drivers build config 00:15:36.895 net/i40e: not in enabled drivers build config 00:15:36.895 net/iavf: not in enabled drivers build config 00:15:36.895 net/ice: not in enabled drivers build config 00:15:36.895 net/idpf: not in enabled drivers build config 00:15:36.895 net/igc: not in enabled drivers build config 00:15:36.895 net/ionic: not in enabled drivers build config 00:15:36.895 net/ipn3ke: not in enabled drivers build config 00:15:36.895 net/ixgbe: not in enabled drivers build config 00:15:36.895 net/mana: not in enabled drivers build config 00:15:36.895 net/memif: not in enabled drivers build config 00:15:36.895 net/mlx4: not in enabled drivers build config 00:15:36.895 net/mlx5: not in enabled drivers build config 00:15:36.895 net/mvneta: not in enabled drivers build config 00:15:36.895 net/mvpp2: not in enabled drivers build config 00:15:36.895 net/netvsc: not in enabled drivers build config 00:15:36.895 net/nfb: not in enabled drivers build config 00:15:36.895 net/nfp: not in enabled drivers build config 00:15:36.895 net/ngbe: not in enabled drivers build config 00:15:36.895 net/null: not in enabled drivers build config 00:15:36.895 net/octeontx: not in enabled drivers build config 00:15:36.895 net/octeon_ep: not in enabled drivers build config 00:15:36.895 net/pcap: not in enabled drivers build config 00:15:36.895 net/pfe: not in enabled drivers build config 00:15:36.895 net/qede: not in enabled drivers build config 00:15:36.895 net/ring: not in enabled drivers build config 00:15:36.895 net/sfc: not in enabled drivers build config 00:15:36.895 net/softnic: not in enabled drivers build config 00:15:36.895 net/tap: not in enabled drivers build config 
00:15:36.895 net/thunderx: not in enabled drivers build config 00:15:36.895 net/txgbe: not in enabled drivers build config 00:15:36.895 net/vdev_netvsc: not in enabled drivers build config 00:15:36.895 net/vhost: not in enabled drivers build config 00:15:36.895 net/virtio: not in enabled drivers build config 00:15:36.895 net/vmxnet3: not in enabled drivers build config 00:15:36.895 raw/*: missing internal dependency, "rawdev" 00:15:36.895 crypto/armv8: not in enabled drivers build config 00:15:36.895 crypto/bcmfs: not in enabled drivers build config 00:15:36.895 crypto/caam_jr: not in enabled drivers build config 00:15:36.895 crypto/ccp: not in enabled drivers build config 00:15:36.895 crypto/cnxk: not in enabled drivers build config 00:15:36.895 crypto/dpaa_sec: not in enabled drivers build config 00:15:36.895 crypto/dpaa2_sec: not in enabled drivers build config 00:15:36.895 crypto/ipsec_mb: not in enabled drivers build config 00:15:36.895 crypto/mlx5: not in enabled drivers build config 00:15:36.895 crypto/mvsam: not in enabled drivers build config 00:15:36.895 crypto/nitrox: not in enabled drivers build config 00:15:36.895 crypto/null: not in enabled drivers build config 00:15:36.895 crypto/octeontx: not in enabled drivers build config 00:15:36.895 crypto/openssl: not in enabled drivers build config 00:15:36.895 crypto/scheduler: not in enabled drivers build config 00:15:36.895 crypto/uadk: not in enabled drivers build config 00:15:36.895 crypto/virtio: not in enabled drivers build config 00:15:36.895 compress/isal: not in enabled drivers build config 00:15:36.895 compress/mlx5: not in enabled drivers build config 00:15:36.895 compress/nitrox: not in enabled drivers build config 00:15:36.895 compress/octeontx: not in enabled drivers build config 00:15:36.895 compress/zlib: not in enabled drivers build config 00:15:36.895 regex/*: missing internal dependency, "regexdev" 00:15:36.895 ml/*: missing internal dependency, "mldev" 00:15:36.895 vdpa/ifc: not in enabled 
drivers build config 00:15:36.895 vdpa/mlx5: not in enabled drivers build config 00:15:36.895 vdpa/nfp: not in enabled drivers build config 00:15:36.895 vdpa/sfc: not in enabled drivers build config 00:15:36.895 event/*: missing internal dependency, "eventdev" 00:15:36.895 baseband/*: missing internal dependency, "bbdev" 00:15:36.895 gpu/*: missing internal dependency, "gpudev" 00:15:36.895 00:15:36.895 00:15:36.895 Build targets in project: 84 00:15:36.895 00:15:36.895 DPDK 24.03.0 00:15:36.895 00:15:36.895 User defined options 00:15:36.895 buildtype : debug 00:15:36.895 default_library : shared 00:15:36.895 libdir : lib 00:15:36.895 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:15:36.895 b_sanitize : address 00:15:36.895 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:15:36.895 c_link_args : 00:15:36.895 cpu_instruction_set: native 00:15:36.895 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:15:36.895 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:15:36.895 enable_docs : false 00:15:36.895 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:15:36.896 enable_kmods : false 00:15:36.896 max_lcores : 128 00:15:36.896 tests : false 00:15:36.896 00:15:36.896 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:15:37.154 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:15:37.154 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:15:37.154 [2/267] 
Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:15:37.154 [3/267] Linking static target lib/librte_kvargs.a 00:15:37.154 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:15:37.154 [5/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:15:37.154 [6/267] Linking static target lib/librte_log.a 00:15:37.413 [7/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:15:37.413 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:15:37.671 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:15:37.671 [10/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:15:37.671 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:15:37.671 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:15:37.671 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:15:37.671 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:15:37.671 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:15:37.929 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:15:38.187 [17/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:15:38.187 [18/267] Linking static target lib/librte_telemetry.a 00:15:38.187 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:15:38.187 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:15:38.187 [21/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:15:38.187 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:15:38.187 [23/267] Linking target lib/librte_log.so.24.1 00:15:38.187 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 
00:15:38.187 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:15:38.187 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:15:38.445 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:15:38.445 [28/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:15:38.445 [29/267] Linking target lib/librte_kvargs.so.24.1 00:15:38.445 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:15:38.445 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:15:38.703 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:15:38.703 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:15:38.703 [34/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:15:38.703 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:15:38.703 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:15:38.703 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:15:38.703 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:15:38.703 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:15:38.703 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:15:38.703 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:15:38.960 [42/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:15:38.960 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:15:38.960 [44/267] Linking target lib/librte_telemetry.so.24.1 00:15:38.960 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:15:39.218 [46/267] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:15:39.218 [47/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:15:39.218 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:15:39.218 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:15:39.218 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:15:39.519 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:15:39.519 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:15:39.519 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:15:39.519 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:15:39.519 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:15:39.519 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:15:39.519 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:15:39.519 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:15:39.802 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:15:39.802 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:15:39.802 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:15:39.802 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:15:39.802 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:15:39.802 [64/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:15:39.802 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:15:40.063 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:15:40.063 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:15:40.063 [68/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:15:40.063 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:15:40.063 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:15:40.063 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:15:40.063 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:15:40.323 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:15:40.323 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:15:40.323 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:15:40.323 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:15:40.323 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:15:40.323 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:15:40.581 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:15:40.581 [80/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:15:40.581 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:15:40.581 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:15:40.581 [83/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:15:40.581 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:15:40.581 [85/267] Linking static target lib/librte_ring.a 00:15:40.839 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:15:40.839 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:15:40.839 [88/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:15:40.839 [89/267] Linking static target lib/librte_eal.a 00:15:40.839 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:15:40.839 [91/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:15:40.839 
[92/267] Linking static target lib/librte_rcu.a 00:15:40.839 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:15:40.839 [94/267] Linking static target lib/librte_mempool.a 00:15:41.099 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:15:41.099 [96/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:15:41.099 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:15:41.099 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:15:41.099 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:15:41.360 [100/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:15:41.360 [101/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:15:41.360 [102/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:15:41.360 [103/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:15:41.360 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:15:41.360 [105/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:15:41.360 [106/267] Linking static target lib/librte_mbuf.a 00:15:41.621 [107/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:15:41.621 [108/267] Linking static target lib/librte_meter.a 00:15:41.621 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:15:41.621 [110/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:15:41.621 [111/267] Linking static target lib/librte_net.a 00:15:41.621 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:15:41.880 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:15:41.880 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:15:41.880 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:15:41.880 
[116/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:15:42.140 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:15:42.140 [118/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:15:42.140 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:15:42.402 [120/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:15:42.402 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:15:42.402 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:15:42.731 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:15:42.731 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:15:42.731 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:15:42.731 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:15:42.731 [127/267] Linking static target lib/librte_pci.a 00:15:42.731 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:15:42.731 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:15:42.731 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:15:42.731 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:15:42.731 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:15:43.002 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:15:43.002 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:15:43.002 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:15:43.002 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:15:43.002 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 
00:15:43.002 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:15:43.002 [139/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:15:43.002 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:15:43.002 [141/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:15:43.002 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:15:43.002 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:15:43.002 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:15:43.002 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:15:43.002 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:15:43.002 [147/267] Linking static target lib/librte_cmdline.a 00:15:43.274 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:15:43.274 [149/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:15:43.274 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:15:43.534 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:15:43.534 [152/267] Linking static target lib/librte_ethdev.a 00:15:43.534 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:15:43.534 [154/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:15:43.534 [155/267] Linking static target lib/librte_timer.a 00:15:43.534 [156/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:15:43.534 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:15:43.534 [158/267] Linking static target lib/librte_compressdev.a 00:15:43.792 [159/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:15:43.792 [160/267] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:15:43.792 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:15:44.052 [162/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:15:44.052 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:15:44.052 [164/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:15:44.052 [165/267] Linking static target lib/librte_dmadev.a 00:15:44.052 [166/267] Linking static target lib/librte_hash.a 00:15:44.052 [167/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:15:44.052 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:15:44.052 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:15:44.313 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:15:44.313 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:15:44.313 [172/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:44.313 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:15:44.574 [174/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:15:44.574 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:15:44.574 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:15:44.574 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:15:44.574 [178/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:44.834 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:15:44.834 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:15:44.834 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:15:44.834 
[182/267] Linking static target lib/librte_power.a 00:15:44.834 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:15:45.094 [184/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:15:45.094 [185/267] Linking static target lib/librte_cryptodev.a 00:15:45.094 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:15:45.094 [187/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:15:45.094 [188/267] Linking static target lib/librte_reorder.a 00:15:45.094 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:15:45.355 [190/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:15:45.355 [191/267] Linking static target lib/librte_security.a 00:15:45.355 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:15:45.618 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:15:45.618 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:15:45.878 [195/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:15:45.878 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:15:45.878 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:15:45.878 [198/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:15:45.878 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:15:46.138 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:15:46.138 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:15:46.138 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:15:46.138 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:15:46.138 [204/267] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:15:46.398 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:15:46.398 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:15:46.398 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:15:46.398 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:15:46.398 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:15:46.686 [210/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:15:46.686 [211/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:15:46.686 [212/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:15:46.686 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:15:46.686 [214/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:15:46.686 [215/267] Linking static target drivers/librte_bus_vdev.a 00:15:46.686 [216/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:15:46.686 [217/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:15:46.686 [218/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:15:46.686 [219/267] Linking static target drivers/librte_bus_pci.a 00:15:46.946 [220/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:15:46.946 [221/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:46.946 [222/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:15:46.946 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:15:46.946 [224/267] Linking static target drivers/librte_mempool_ring.a 00:15:46.946 [225/267] Generating 
drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:46.946 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:15:47.515 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:15:48.453 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:15:48.453 [229/267] Linking target lib/librte_eal.so.24.1 00:15:48.713 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:15:48.713 [231/267] Linking target lib/librte_pci.so.24.1 00:15:48.713 [232/267] Linking target lib/librte_timer.so.24.1 00:15:48.713 [233/267] Linking target lib/librte_ring.so.24.1 00:15:48.713 [234/267] Linking target drivers/librte_bus_vdev.so.24.1 00:15:48.713 [235/267] Linking target lib/librte_meter.so.24.1 00:15:48.713 [236/267] Linking target lib/librte_dmadev.so.24.1 00:15:48.713 [237/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:15:48.713 [238/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:15:48.713 [239/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:15:48.713 [240/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:15:48.975 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:15:48.975 [242/267] Linking target drivers/librte_bus_pci.so.24.1 00:15:48.975 [243/267] Linking target lib/librte_rcu.so.24.1 00:15:48.975 [244/267] Linking target lib/librte_mempool.so.24.1 00:15:48.975 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:15:48.975 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:15:48.975 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:15:48.975 [248/267] Linking target lib/librte_mbuf.so.24.1 00:15:49.233 
[249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:15:49.233 [250/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:49.233 [251/267] Linking target lib/librte_compressdev.so.24.1 00:15:49.233 [252/267] Linking target lib/librte_net.so.24.1 00:15:49.233 [253/267] Linking target lib/librte_reorder.so.24.1 00:15:49.233 [254/267] Linking target lib/librte_cryptodev.so.24.1 00:15:49.233 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:15:49.233 [256/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:15:49.494 [257/267] Linking target lib/librte_hash.so.24.1 00:15:49.494 [258/267] Linking target lib/librte_cmdline.so.24.1 00:15:49.494 [259/267] Linking target lib/librte_ethdev.so.24.1 00:15:49.494 [260/267] Linking target lib/librte_security.so.24.1 00:15:49.494 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:15:49.494 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:15:49.494 [263/267] Linking target lib/librte_power.so.24.1 00:15:51.475 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:15:51.475 [265/267] Linking static target lib/librte_vhost.a 00:15:52.417 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:15:52.679 [267/267] Linking target lib/librte_vhost.so.24.1 00:15:52.679 INFO: autodetecting backend as ninja 00:15:52.680 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:16:07.669 CC lib/log/log_flags.o 00:16:07.669 CC lib/ut_mock/mock.o 00:16:07.669 CC lib/log/log.o 00:16:07.669 CC lib/log/log_deprecated.o 00:16:07.669 CC lib/ut/ut.o 00:16:07.669 LIB libspdk_ut.a 00:16:07.669 SO libspdk_ut.so.2.0 00:16:07.669 LIB libspdk_ut_mock.a 00:16:07.669 LIB libspdk_log.a 
00:16:07.669 SO libspdk_ut_mock.so.6.0 00:16:07.669 SO libspdk_log.so.7.1 00:16:07.669 SYMLINK libspdk_ut.so 00:16:07.669 SYMLINK libspdk_ut_mock.so 00:16:07.669 SYMLINK libspdk_log.so 00:16:07.669 CC lib/ioat/ioat.o 00:16:07.669 CC lib/dma/dma.o 00:16:07.669 CXX lib/trace_parser/trace.o 00:16:07.669 CC lib/util/bit_array.o 00:16:07.669 CC lib/util/base64.o 00:16:07.669 CC lib/util/crc32.o 00:16:07.669 CC lib/util/cpuset.o 00:16:07.669 CC lib/util/crc16.o 00:16:07.669 CC lib/util/crc32c.o 00:16:07.669 CC lib/vfio_user/host/vfio_user_pci.o 00:16:07.669 CC lib/util/crc32_ieee.o 00:16:07.669 CC lib/vfio_user/host/vfio_user.o 00:16:07.669 CC lib/util/crc64.o 00:16:07.669 CC lib/util/dif.o 00:16:07.669 LIB libspdk_dma.a 00:16:07.669 CC lib/util/fd.o 00:16:07.669 SO libspdk_dma.so.5.0 00:16:07.669 CC lib/util/fd_group.o 00:16:07.669 CC lib/util/file.o 00:16:07.669 CC lib/util/hexlify.o 00:16:07.669 LIB libspdk_ioat.a 00:16:07.669 SYMLINK libspdk_dma.so 00:16:07.669 CC lib/util/iov.o 00:16:07.669 SO libspdk_ioat.so.7.0 00:16:07.669 CC lib/util/math.o 00:16:07.669 CC lib/util/net.o 00:16:07.669 LIB libspdk_vfio_user.a 00:16:07.669 SYMLINK libspdk_ioat.so 00:16:07.669 CC lib/util/pipe.o 00:16:07.670 SO libspdk_vfio_user.so.5.0 00:16:07.670 CC lib/util/strerror_tls.o 00:16:07.670 CC lib/util/string.o 00:16:07.670 SYMLINK libspdk_vfio_user.so 00:16:07.670 CC lib/util/uuid.o 00:16:07.670 CC lib/util/xor.o 00:16:07.670 CC lib/util/zipf.o 00:16:07.670 CC lib/util/md5.o 00:16:07.670 LIB libspdk_util.a 00:16:07.670 LIB libspdk_trace_parser.a 00:16:07.670 SO libspdk_util.so.10.1 00:16:07.670 SO libspdk_trace_parser.so.6.0 00:16:07.670 SYMLINK libspdk_util.so 00:16:07.670 SYMLINK libspdk_trace_parser.so 00:16:07.670 CC lib/conf/conf.o 00:16:07.670 CC lib/idxd/idxd.o 00:16:07.670 CC lib/idxd/idxd_kernel.o 00:16:07.670 CC lib/idxd/idxd_user.o 00:16:07.670 CC lib/vmd/vmd.o 00:16:07.670 CC lib/vmd/led.o 00:16:07.670 CC lib/rdma_utils/rdma_utils.o 00:16:07.670 CC lib/env_dpdk/env.o 
00:16:07.670 CC lib/env_dpdk/memory.o 00:16:07.670 CC lib/json/json_parse.o 00:16:07.670 CC lib/json/json_util.o 00:16:07.670 CC lib/json/json_write.o 00:16:07.942 LIB libspdk_conf.a 00:16:07.942 CC lib/env_dpdk/pci.o 00:16:07.942 SO libspdk_conf.so.6.0 00:16:07.942 CC lib/env_dpdk/init.o 00:16:07.942 LIB libspdk_rdma_utils.a 00:16:07.942 SYMLINK libspdk_conf.so 00:16:07.942 SO libspdk_rdma_utils.so.1.0 00:16:07.942 CC lib/env_dpdk/threads.o 00:16:07.942 SYMLINK libspdk_rdma_utils.so 00:16:07.942 CC lib/env_dpdk/pci_ioat.o 00:16:07.942 CC lib/env_dpdk/pci_virtio.o 00:16:07.942 CC lib/env_dpdk/pci_vmd.o 00:16:07.942 LIB libspdk_json.a 00:16:07.942 SO libspdk_json.so.6.0 00:16:07.942 CC lib/env_dpdk/pci_idxd.o 00:16:07.942 CC lib/env_dpdk/pci_event.o 00:16:07.942 SYMLINK libspdk_json.so 00:16:08.200 CC lib/env_dpdk/sigbus_handler.o 00:16:08.200 CC lib/env_dpdk/pci_dpdk.o 00:16:08.200 LIB libspdk_idxd.a 00:16:08.200 CC lib/env_dpdk/pci_dpdk_2207.o 00:16:08.200 CC lib/env_dpdk/pci_dpdk_2211.o 00:16:08.200 SO libspdk_idxd.so.12.1 00:16:08.200 CC lib/rdma_provider/common.o 00:16:08.200 CC lib/rdma_provider/rdma_provider_verbs.o 00:16:08.200 SYMLINK libspdk_idxd.so 00:16:08.200 LIB libspdk_vmd.a 00:16:08.200 SO libspdk_vmd.so.6.0 00:16:08.200 CC lib/jsonrpc/jsonrpc_server.o 00:16:08.457 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:16:08.457 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:16:08.457 CC lib/jsonrpc/jsonrpc_client.o 00:16:08.457 SYMLINK libspdk_vmd.so 00:16:08.457 LIB libspdk_rdma_provider.a 00:16:08.457 SO libspdk_rdma_provider.so.7.0 00:16:08.457 SYMLINK libspdk_rdma_provider.so 00:16:08.457 LIB libspdk_jsonrpc.a 00:16:08.715 SO libspdk_jsonrpc.so.6.0 00:16:08.715 SYMLINK libspdk_jsonrpc.so 00:16:08.973 CC lib/rpc/rpc.o 00:16:08.973 LIB libspdk_env_dpdk.a 00:16:08.973 SO libspdk_env_dpdk.so.15.1 00:16:09.230 LIB libspdk_rpc.a 00:16:09.230 SO libspdk_rpc.so.6.0 00:16:09.230 SYMLINK libspdk_env_dpdk.so 00:16:09.230 SYMLINK libspdk_rpc.so 00:16:09.488 CC 
lib/keyring/keyring_rpc.o 00:16:09.488 CC lib/keyring/keyring.o 00:16:09.488 CC lib/trace/trace.o 00:16:09.488 CC lib/trace/trace_flags.o 00:16:09.488 CC lib/trace/trace_rpc.o 00:16:09.488 CC lib/notify/notify.o 00:16:09.488 CC lib/notify/notify_rpc.o 00:16:09.488 LIB libspdk_notify.a 00:16:09.488 SO libspdk_notify.so.6.0 00:16:09.488 LIB libspdk_keyring.a 00:16:09.488 SYMLINK libspdk_notify.so 00:16:09.488 LIB libspdk_trace.a 00:16:09.488 SO libspdk_keyring.so.2.0 00:16:09.746 SO libspdk_trace.so.11.0 00:16:09.746 SYMLINK libspdk_keyring.so 00:16:09.746 SYMLINK libspdk_trace.so 00:16:10.003 CC lib/thread/thread.o 00:16:10.003 CC lib/thread/iobuf.o 00:16:10.003 CC lib/sock/sock_rpc.o 00:16:10.003 CC lib/sock/sock.o 00:16:10.261 LIB libspdk_sock.a 00:16:10.261 SO libspdk_sock.so.10.0 00:16:10.519 SYMLINK libspdk_sock.so 00:16:10.519 CC lib/nvme/nvme_ns_cmd.o 00:16:10.519 CC lib/nvme/nvme_ctrlr_cmd.o 00:16:10.519 CC lib/nvme/nvme_ns.o 00:16:10.519 CC lib/nvme/nvme_fabric.o 00:16:10.519 CC lib/nvme/nvme_ctrlr.o 00:16:10.519 CC lib/nvme/nvme_pcie_common.o 00:16:10.519 CC lib/nvme/nvme.o 00:16:10.519 CC lib/nvme/nvme_pcie.o 00:16:10.519 CC lib/nvme/nvme_qpair.o 00:16:11.084 CC lib/nvme/nvme_quirks.o 00:16:11.084 CC lib/nvme/nvme_transport.o 00:16:11.343 CC lib/nvme/nvme_discovery.o 00:16:11.343 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:16:11.343 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:16:11.343 LIB libspdk_thread.a 00:16:11.343 CC lib/nvme/nvme_tcp.o 00:16:11.343 CC lib/nvme/nvme_opal.o 00:16:11.343 SO libspdk_thread.so.11.0 00:16:11.601 CC lib/nvme/nvme_io_msg.o 00:16:11.601 SYMLINK libspdk_thread.so 00:16:11.601 CC lib/nvme/nvme_poll_group.o 00:16:11.601 CC lib/nvme/nvme_zns.o 00:16:11.859 CC lib/nvme/nvme_stubs.o 00:16:11.859 CC lib/nvme/nvme_auth.o 00:16:11.859 CC lib/nvme/nvme_cuse.o 00:16:11.859 CC lib/nvme/nvme_rdma.o 00:16:12.116 CC lib/accel/accel.o 00:16:12.116 CC lib/blob/blobstore.o 00:16:12.116 CC lib/blob/request.o 00:16:12.116 CC lib/blob/zeroes.o 00:16:12.116 CC 
lib/accel/accel_rpc.o 00:16:12.374 CC lib/blob/blob_bs_dev.o 00:16:12.374 CC lib/accel/accel_sw.o 00:16:12.633 CC lib/init/json_config.o 00:16:12.633 CC lib/virtio/virtio.o 00:16:12.633 CC lib/virtio/virtio_vhost_user.o 00:16:12.633 CC lib/init/subsystem.o 00:16:12.633 CC lib/init/subsystem_rpc.o 00:16:12.633 CC lib/init/rpc.o 00:16:12.633 CC lib/virtio/virtio_vfio_user.o 00:16:12.891 CC lib/virtio/virtio_pci.o 00:16:12.891 LIB libspdk_init.a 00:16:12.891 SO libspdk_init.so.6.0 00:16:12.891 CC lib/fsdev/fsdev.o 00:16:12.891 CC lib/fsdev/fsdev_io.o 00:16:12.891 CC lib/fsdev/fsdev_rpc.o 00:16:12.891 SYMLINK libspdk_init.so 00:16:12.891 LIB libspdk_nvme.a 00:16:12.891 LIB libspdk_virtio.a 00:16:13.148 SO libspdk_virtio.so.7.0 00:16:13.148 CC lib/event/app.o 00:16:13.148 CC lib/event/reactor.o 00:16:13.148 CC lib/event/log_rpc.o 00:16:13.148 CC lib/event/app_rpc.o 00:16:13.148 SYMLINK libspdk_virtio.so 00:16:13.148 LIB libspdk_accel.a 00:16:13.148 CC lib/event/scheduler_static.o 00:16:13.148 SO libspdk_nvme.so.15.0 00:16:13.148 SO libspdk_accel.so.16.0 00:16:13.148 SYMLINK libspdk_accel.so 00:16:13.406 SYMLINK libspdk_nvme.so 00:16:13.406 CC lib/bdev/bdev.o 00:16:13.406 CC lib/bdev/part.o 00:16:13.406 CC lib/bdev/bdev_rpc.o 00:16:13.406 CC lib/bdev/bdev_zone.o 00:16:13.406 CC lib/bdev/scsi_nvme.o 00:16:13.406 LIB libspdk_fsdev.a 00:16:13.406 LIB libspdk_event.a 00:16:13.663 SO libspdk_fsdev.so.2.0 00:16:13.663 SO libspdk_event.so.14.0 00:16:13.663 SYMLINK libspdk_event.so 00:16:13.663 SYMLINK libspdk_fsdev.so 00:16:13.921 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:16:14.486 LIB libspdk_fuse_dispatcher.a 00:16:14.486 SO libspdk_fuse_dispatcher.so.1.0 00:16:14.744 SYMLINK libspdk_fuse_dispatcher.so 00:16:15.400 LIB libspdk_blob.a 00:16:15.400 SO libspdk_blob.so.12.0 00:16:15.400 SYMLINK libspdk_blob.so 00:16:15.657 CC lib/blobfs/blobfs.o 00:16:15.657 CC lib/blobfs/tree.o 00:16:15.657 CC lib/lvol/lvol.o 00:16:15.657 LIB libspdk_bdev.a 00:16:15.914 SO libspdk_bdev.so.17.0 
00:16:15.914 SYMLINK libspdk_bdev.so 00:16:16.173 CC lib/scsi/dev.o 00:16:16.173 CC lib/ublk/ublk.o 00:16:16.173 CC lib/scsi/lun.o 00:16:16.173 CC lib/scsi/port.o 00:16:16.173 CC lib/ublk/ublk_rpc.o 00:16:16.173 CC lib/nvmf/ctrlr.o 00:16:16.173 CC lib/ftl/ftl_core.o 00:16:16.173 CC lib/nbd/nbd.o 00:16:16.173 CC lib/scsi/scsi.o 00:16:16.173 CC lib/scsi/scsi_bdev.o 00:16:16.173 LIB libspdk_blobfs.a 00:16:16.173 SO libspdk_blobfs.so.11.0 00:16:16.431 SYMLINK libspdk_blobfs.so 00:16:16.431 CC lib/scsi/scsi_pr.o 00:16:16.431 CC lib/nvmf/ctrlr_discovery.o 00:16:16.431 CC lib/nvmf/ctrlr_bdev.o 00:16:16.431 CC lib/nbd/nbd_rpc.o 00:16:16.431 LIB libspdk_lvol.a 00:16:16.690 CC lib/nvmf/subsystem.o 00:16:16.690 SO libspdk_lvol.so.11.0 00:16:16.690 CC lib/ftl/ftl_init.o 00:16:16.690 LIB libspdk_nbd.a 00:16:16.690 CC lib/ftl/ftl_layout.o 00:16:16.690 SO libspdk_nbd.so.7.0 00:16:16.690 SYMLINK libspdk_lvol.so 00:16:16.690 CC lib/ftl/ftl_debug.o 00:16:16.690 SYMLINK libspdk_nbd.so 00:16:16.690 CC lib/ftl/ftl_io.o 00:16:16.690 CC lib/scsi/scsi_rpc.o 00:16:16.690 CC lib/ftl/ftl_sb.o 00:16:16.690 LIB libspdk_ublk.a 00:16:16.948 SO libspdk_ublk.so.3.0 00:16:16.948 CC lib/nvmf/nvmf.o 00:16:16.948 CC lib/scsi/task.o 00:16:16.948 CC lib/ftl/ftl_l2p.o 00:16:16.948 CC lib/ftl/ftl_l2p_flat.o 00:16:16.948 SYMLINK libspdk_ublk.so 00:16:16.948 CC lib/ftl/ftl_nv_cache.o 00:16:16.948 CC lib/ftl/ftl_band.o 00:16:16.948 CC lib/ftl/ftl_band_ops.o 00:16:16.948 CC lib/ftl/ftl_writer.o 00:16:16.948 LIB libspdk_scsi.a 00:16:16.948 CC lib/ftl/ftl_rq.o 00:16:17.206 CC lib/nvmf/nvmf_rpc.o 00:16:17.206 SO libspdk_scsi.so.9.0 00:16:17.206 SYMLINK libspdk_scsi.so 00:16:17.206 CC lib/ftl/ftl_reloc.o 00:16:17.206 CC lib/ftl/ftl_l2p_cache.o 00:16:17.464 CC lib/ftl/ftl_p2l.o 00:16:17.464 CC lib/iscsi/conn.o 00:16:17.464 CC lib/vhost/vhost.o 00:16:17.464 CC lib/vhost/vhost_rpc.o 00:16:17.720 CC lib/iscsi/init_grp.o 00:16:17.720 CC lib/iscsi/iscsi.o 00:16:17.720 CC lib/iscsi/param.o 00:16:17.720 CC 
lib/iscsi/portal_grp.o 00:16:17.720 CC lib/iscsi/tgt_node.o 00:16:17.720 CC lib/ftl/ftl_p2l_log.o 00:16:17.977 CC lib/iscsi/iscsi_subsystem.o 00:16:17.977 CC lib/nvmf/transport.o 00:16:17.977 CC lib/nvmf/tcp.o 00:16:17.977 CC lib/ftl/mngt/ftl_mngt.o 00:16:17.977 CC lib/iscsi/iscsi_rpc.o 00:16:18.305 CC lib/iscsi/task.o 00:16:18.305 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:16:18.305 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:16:18.305 CC lib/vhost/vhost_scsi.o 00:16:18.562 CC lib/ftl/mngt/ftl_mngt_startup.o 00:16:18.562 CC lib/ftl/mngt/ftl_mngt_md.o 00:16:18.562 CC lib/ftl/mngt/ftl_mngt_misc.o 00:16:18.562 CC lib/nvmf/stubs.o 00:16:18.562 CC lib/nvmf/mdns_server.o 00:16:18.562 CC lib/vhost/vhost_blk.o 00:16:18.562 CC lib/vhost/rte_vhost_user.o 00:16:18.562 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:16:18.562 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:16:18.562 CC lib/nvmf/rdma.o 00:16:18.820 CC lib/ftl/mngt/ftl_mngt_band.o 00:16:18.820 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:16:18.820 CC lib/nvmf/auth.o 00:16:18.820 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:16:18.820 LIB libspdk_iscsi.a 00:16:19.079 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:16:19.079 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:16:19.079 SO libspdk_iscsi.so.8.0 00:16:19.079 CC lib/ftl/utils/ftl_conf.o 00:16:19.079 CC lib/ftl/utils/ftl_md.o 00:16:19.079 SYMLINK libspdk_iscsi.so 00:16:19.079 CC lib/ftl/utils/ftl_mempool.o 00:16:19.079 CC lib/ftl/utils/ftl_bitmap.o 00:16:19.336 CC lib/ftl/utils/ftl_property.o 00:16:19.336 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:16:19.336 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:16:19.336 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:16:19.336 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:16:19.336 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:16:19.336 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:16:19.336 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:16:19.336 LIB libspdk_vhost.a 00:16:19.594 CC lib/ftl/upgrade/ftl_sb_v3.o 00:16:19.594 CC lib/ftl/upgrade/ftl_sb_v5.o 00:16:19.594 SO libspdk_vhost.so.8.0 00:16:19.594 CC 
lib/ftl/nvc/ftl_nvc_dev.o 00:16:19.594 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:16:19.594 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:16:19.594 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:16:19.594 SYMLINK libspdk_vhost.so 00:16:19.594 CC lib/ftl/base/ftl_base_dev.o 00:16:19.594 CC lib/ftl/base/ftl_base_bdev.o 00:16:19.594 CC lib/ftl/ftl_trace.o 00:16:19.853 LIB libspdk_ftl.a 00:16:20.113 SO libspdk_ftl.so.9.0 00:16:20.372 SYMLINK libspdk_ftl.so 00:16:20.629 LIB libspdk_nvmf.a 00:16:20.629 SO libspdk_nvmf.so.20.0 00:16:20.886 SYMLINK libspdk_nvmf.so 00:16:21.143 CC module/env_dpdk/env_dpdk_rpc.o 00:16:21.401 CC module/accel/ioat/accel_ioat.o 00:16:21.401 CC module/keyring/file/keyring.o 00:16:21.401 CC module/blob/bdev/blob_bdev.o 00:16:21.401 CC module/sock/posix/posix.o 00:16:21.401 CC module/scheduler/dynamic/scheduler_dynamic.o 00:16:21.401 CC module/accel/error/accel_error.o 00:16:21.401 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:16:21.401 CC module/fsdev/aio/fsdev_aio.o 00:16:21.401 CC module/scheduler/gscheduler/gscheduler.o 00:16:21.401 LIB libspdk_env_dpdk_rpc.a 00:16:21.401 CC module/keyring/file/keyring_rpc.o 00:16:21.401 SO libspdk_env_dpdk_rpc.so.6.0 00:16:21.401 LIB libspdk_scheduler_dpdk_governor.a 00:16:21.401 SO libspdk_scheduler_dpdk_governor.so.4.0 00:16:21.401 SYMLINK libspdk_env_dpdk_rpc.so 00:16:21.401 CC module/accel/ioat/accel_ioat_rpc.o 00:16:21.401 LIB libspdk_scheduler_gscheduler.a 00:16:21.401 LIB libspdk_scheduler_dynamic.a 00:16:21.401 LIB libspdk_keyring_file.a 00:16:21.401 SYMLINK libspdk_scheduler_dpdk_governor.so 00:16:21.401 SO libspdk_scheduler_gscheduler.so.4.0 00:16:21.401 CC module/accel/error/accel_error_rpc.o 00:16:21.401 SO libspdk_scheduler_dynamic.so.4.0 00:16:21.659 SO libspdk_keyring_file.so.2.0 00:16:21.659 SYMLINK libspdk_scheduler_gscheduler.so 00:16:21.659 SYMLINK libspdk_scheduler_dynamic.so 00:16:21.659 LIB libspdk_blob_bdev.a 00:16:21.659 CC module/fsdev/aio/fsdev_aio_rpc.o 00:16:21.659 SYMLINK 
libspdk_keyring_file.so 00:16:21.659 CC module/fsdev/aio/linux_aio_mgr.o 00:16:21.659 LIB libspdk_accel_ioat.a 00:16:21.659 SO libspdk_blob_bdev.so.12.0 00:16:21.659 SO libspdk_accel_ioat.so.6.0 00:16:21.659 CC module/accel/dsa/accel_dsa.o 00:16:21.659 LIB libspdk_accel_error.a 00:16:21.659 SYMLINK libspdk_blob_bdev.so 00:16:21.659 CC module/accel/iaa/accel_iaa.o 00:16:21.659 CC module/accel/iaa/accel_iaa_rpc.o 00:16:21.659 SO libspdk_accel_error.so.2.0 00:16:21.659 SYMLINK libspdk_accel_ioat.so 00:16:21.659 CC module/accel/dsa/accel_dsa_rpc.o 00:16:21.659 SYMLINK libspdk_accel_error.so 00:16:21.659 CC module/keyring/linux/keyring.o 00:16:21.659 CC module/keyring/linux/keyring_rpc.o 00:16:21.918 LIB libspdk_accel_iaa.a 00:16:21.918 SO libspdk_accel_iaa.so.3.0 00:16:21.918 LIB libspdk_keyring_linux.a 00:16:21.918 CC module/bdev/error/vbdev_error.o 00:16:21.918 SYMLINK libspdk_accel_iaa.so 00:16:21.918 CC module/bdev/error/vbdev_error_rpc.o 00:16:21.918 SO libspdk_keyring_linux.so.1.0 00:16:21.918 CC module/bdev/delay/vbdev_delay.o 00:16:21.918 CC module/blobfs/bdev/blobfs_bdev.o 00:16:21.918 CC module/bdev/gpt/gpt.o 00:16:21.918 LIB libspdk_accel_dsa.a 00:16:21.918 SO libspdk_accel_dsa.so.5.0 00:16:21.918 SYMLINK libspdk_keyring_linux.so 00:16:21.918 LIB libspdk_fsdev_aio.a 00:16:21.918 CC module/bdev/lvol/vbdev_lvol.o 00:16:21.918 CC module/bdev/gpt/vbdev_gpt.o 00:16:22.175 SO libspdk_fsdev_aio.so.1.0 00:16:22.175 SYMLINK libspdk_accel_dsa.so 00:16:22.175 CC module/bdev/delay/vbdev_delay_rpc.o 00:16:22.175 SYMLINK libspdk_fsdev_aio.so 00:16:22.175 LIB libspdk_sock_posix.a 00:16:22.175 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:16:22.175 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:16:22.175 SO libspdk_sock_posix.so.6.0 00:16:22.175 SYMLINK libspdk_sock_posix.so 00:16:22.175 LIB libspdk_bdev_error.a 00:16:22.175 SO libspdk_bdev_error.so.6.0 00:16:22.175 LIB libspdk_blobfs_bdev.a 00:16:22.175 CC module/bdev/malloc/bdev_malloc.o 00:16:22.175 LIB libspdk_bdev_gpt.a 
00:16:22.175 SO libspdk_blobfs_bdev.so.6.0 00:16:22.433 LIB libspdk_bdev_delay.a 00:16:22.433 SO libspdk_bdev_gpt.so.6.0 00:16:22.433 CC module/bdev/null/bdev_null.o 00:16:22.433 SO libspdk_bdev_delay.so.6.0 00:16:22.433 SYMLINK libspdk_bdev_error.so 00:16:22.433 CC module/bdev/nvme/bdev_nvme.o 00:16:22.433 CC module/bdev/passthru/vbdev_passthru.o 00:16:22.433 SYMLINK libspdk_blobfs_bdev.so 00:16:22.433 CC module/bdev/nvme/bdev_nvme_rpc.o 00:16:22.433 SYMLINK libspdk_bdev_gpt.so 00:16:22.433 CC module/bdev/nvme/nvme_rpc.o 00:16:22.433 SYMLINK libspdk_bdev_delay.so 00:16:22.433 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:16:22.433 CC module/bdev/malloc/bdev_malloc_rpc.o 00:16:22.433 CC module/bdev/raid/bdev_raid.o 00:16:22.692 CC module/bdev/null/bdev_null_rpc.o 00:16:22.692 LIB libspdk_bdev_lvol.a 00:16:22.692 CC module/bdev/nvme/bdev_mdns_client.o 00:16:22.692 LIB libspdk_bdev_passthru.a 00:16:22.692 LIB libspdk_bdev_malloc.a 00:16:22.692 SO libspdk_bdev_lvol.so.6.0 00:16:22.692 SO libspdk_bdev_passthru.so.6.0 00:16:22.692 SO libspdk_bdev_malloc.so.6.0 00:16:22.692 CC module/bdev/split/vbdev_split.o 00:16:22.692 CC module/bdev/zone_block/vbdev_zone_block.o 00:16:22.692 SYMLINK libspdk_bdev_lvol.so 00:16:22.692 CC module/bdev/nvme/vbdev_opal.o 00:16:22.692 SYMLINK libspdk_bdev_passthru.so 00:16:22.692 LIB libspdk_bdev_null.a 00:16:22.692 CC module/bdev/nvme/vbdev_opal_rpc.o 00:16:22.692 SYMLINK libspdk_bdev_malloc.so 00:16:22.692 CC module/bdev/split/vbdev_split_rpc.o 00:16:22.692 SO libspdk_bdev_null.so.6.0 00:16:22.692 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:16:22.951 SYMLINK libspdk_bdev_null.so 00:16:22.951 LIB libspdk_bdev_split.a 00:16:22.951 CC module/bdev/raid/bdev_raid_rpc.o 00:16:22.951 SO libspdk_bdev_split.so.6.0 00:16:22.951 CC module/bdev/raid/bdev_raid_sb.o 00:16:22.951 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:16:22.951 CC module/bdev/aio/bdev_aio.o 00:16:22.951 SYMLINK libspdk_bdev_split.so 00:16:22.951 CC 
module/bdev/aio/bdev_aio_rpc.o 00:16:22.951 CC module/bdev/ftl/bdev_ftl.o 00:16:22.951 CC module/bdev/ftl/bdev_ftl_rpc.o 00:16:22.951 CC module/bdev/raid/raid0.o 00:16:23.208 CC module/bdev/raid/raid1.o 00:16:23.208 LIB libspdk_bdev_zone_block.a 00:16:23.208 SO libspdk_bdev_zone_block.so.6.0 00:16:23.208 SYMLINK libspdk_bdev_zone_block.so 00:16:23.208 CC module/bdev/raid/concat.o 00:16:23.208 CC module/bdev/raid/raid5f.o 00:16:23.208 LIB libspdk_bdev_aio.a 00:16:23.208 LIB libspdk_bdev_ftl.a 00:16:23.208 CC module/bdev/iscsi/bdev_iscsi.o 00:16:23.208 SO libspdk_bdev_aio.so.6.0 00:16:23.208 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:16:23.208 SO libspdk_bdev_ftl.so.6.0 00:16:23.208 CC module/bdev/virtio/bdev_virtio_scsi.o 00:16:23.208 CC module/bdev/virtio/bdev_virtio_blk.o 00:16:23.208 SYMLINK libspdk_bdev_aio.so 00:16:23.465 CC module/bdev/virtio/bdev_virtio_rpc.o 00:16:23.465 SYMLINK libspdk_bdev_ftl.so 00:16:23.724 LIB libspdk_bdev_iscsi.a 00:16:23.724 SO libspdk_bdev_iscsi.so.6.0 00:16:23.724 SYMLINK libspdk_bdev_iscsi.so 00:16:23.724 LIB libspdk_bdev_raid.a 00:16:23.724 SO libspdk_bdev_raid.so.6.0 00:16:23.724 LIB libspdk_bdev_virtio.a 00:16:23.724 SO libspdk_bdev_virtio.so.6.0 00:16:23.981 SYMLINK libspdk_bdev_raid.so 00:16:23.981 SYMLINK libspdk_bdev_virtio.so 00:16:24.918 LIB libspdk_bdev_nvme.a 00:16:25.175 SO libspdk_bdev_nvme.so.7.1 00:16:25.175 SYMLINK libspdk_bdev_nvme.so 00:16:25.741 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:16:25.741 CC module/event/subsystems/vmd/vmd.o 00:16:25.741 CC module/event/subsystems/vmd/vmd_rpc.o 00:16:25.741 CC module/event/subsystems/scheduler/scheduler.o 00:16:25.741 CC module/event/subsystems/sock/sock.o 00:16:25.741 CC module/event/subsystems/fsdev/fsdev.o 00:16:25.741 CC module/event/subsystems/keyring/keyring.o 00:16:25.741 CC module/event/subsystems/iobuf/iobuf.o 00:16:25.741 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:16:25.741 LIB libspdk_event_vhost_blk.a 00:16:25.741 LIB libspdk_event_fsdev.a 
00:16:25.741 LIB libspdk_event_keyring.a 00:16:25.741 LIB libspdk_event_sock.a 00:16:25.741 LIB libspdk_event_vmd.a 00:16:25.741 SO libspdk_event_vhost_blk.so.3.0 00:16:25.741 LIB libspdk_event_iobuf.a 00:16:25.741 LIB libspdk_event_scheduler.a 00:16:25.741 SO libspdk_event_fsdev.so.1.0 00:16:25.741 SO libspdk_event_keyring.so.1.0 00:16:25.741 SO libspdk_event_sock.so.5.0 00:16:25.741 SO libspdk_event_vmd.so.6.0 00:16:25.741 SO libspdk_event_scheduler.so.4.0 00:16:25.741 SO libspdk_event_iobuf.so.3.0 00:16:25.741 SYMLINK libspdk_event_vhost_blk.so 00:16:25.741 SYMLINK libspdk_event_fsdev.so 00:16:25.741 SYMLINK libspdk_event_keyring.so 00:16:25.741 SYMLINK libspdk_event_sock.so 00:16:25.741 SYMLINK libspdk_event_scheduler.so 00:16:25.741 SYMLINK libspdk_event_iobuf.so 00:16:25.741 SYMLINK libspdk_event_vmd.so 00:16:25.999 CC module/event/subsystems/accel/accel.o 00:16:26.258 LIB libspdk_event_accel.a 00:16:26.258 SO libspdk_event_accel.so.6.0 00:16:26.258 SYMLINK libspdk_event_accel.so 00:16:26.515 CC module/event/subsystems/bdev/bdev.o 00:16:26.515 LIB libspdk_event_bdev.a 00:16:26.515 SO libspdk_event_bdev.so.6.0 00:16:26.515 SYMLINK libspdk_event_bdev.so 00:16:26.791 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:16:26.791 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:16:26.791 CC module/event/subsystems/scsi/scsi.o 00:16:26.791 CC module/event/subsystems/ublk/ublk.o 00:16:26.791 CC module/event/subsystems/nbd/nbd.o 00:16:26.791 LIB libspdk_event_ublk.a 00:16:27.049 LIB libspdk_event_nbd.a 00:16:27.049 LIB libspdk_event_scsi.a 00:16:27.049 SO libspdk_event_ublk.so.3.0 00:16:27.049 SO libspdk_event_nbd.so.6.0 00:16:27.049 SO libspdk_event_scsi.so.6.0 00:16:27.049 SYMLINK libspdk_event_ublk.so 00:16:27.049 LIB libspdk_event_nvmf.a 00:16:27.049 SYMLINK libspdk_event_nbd.so 00:16:27.049 SYMLINK libspdk_event_scsi.so 00:16:27.049 SO libspdk_event_nvmf.so.6.0 00:16:27.049 SYMLINK libspdk_event_nvmf.so 00:16:27.049 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 
00:16:27.307 CC module/event/subsystems/iscsi/iscsi.o 00:16:27.307 LIB libspdk_event_vhost_scsi.a 00:16:27.307 SO libspdk_event_vhost_scsi.so.3.0 00:16:27.307 LIB libspdk_event_iscsi.a 00:16:27.307 SYMLINK libspdk_event_vhost_scsi.so 00:16:27.307 SO libspdk_event_iscsi.so.6.0 00:16:27.564 SYMLINK libspdk_event_iscsi.so 00:16:27.564 SO libspdk.so.6.0 00:16:27.564 SYMLINK libspdk.so 00:16:27.822 CC test/rpc_client/rpc_client_test.o 00:16:27.822 CXX app/trace/trace.o 00:16:27.822 TEST_HEADER include/spdk/accel.h 00:16:27.822 TEST_HEADER include/spdk/accel_module.h 00:16:27.822 TEST_HEADER include/spdk/assert.h 00:16:27.822 CC app/trace_record/trace_record.o 00:16:27.822 TEST_HEADER include/spdk/barrier.h 00:16:27.822 TEST_HEADER include/spdk/base64.h 00:16:27.822 TEST_HEADER include/spdk/bdev.h 00:16:27.822 TEST_HEADER include/spdk/bdev_module.h 00:16:27.822 TEST_HEADER include/spdk/bdev_zone.h 00:16:27.822 TEST_HEADER include/spdk/bit_array.h 00:16:27.822 TEST_HEADER include/spdk/bit_pool.h 00:16:27.822 TEST_HEADER include/spdk/blob_bdev.h 00:16:27.822 TEST_HEADER include/spdk/blobfs_bdev.h 00:16:27.822 TEST_HEADER include/spdk/blobfs.h 00:16:27.822 TEST_HEADER include/spdk/blob.h 00:16:27.822 TEST_HEADER include/spdk/conf.h 00:16:27.822 TEST_HEADER include/spdk/config.h 00:16:27.822 TEST_HEADER include/spdk/cpuset.h 00:16:27.822 TEST_HEADER include/spdk/crc16.h 00:16:27.822 TEST_HEADER include/spdk/crc32.h 00:16:27.822 TEST_HEADER include/spdk/crc64.h 00:16:27.822 TEST_HEADER include/spdk/dif.h 00:16:27.822 TEST_HEADER include/spdk/dma.h 00:16:27.822 TEST_HEADER include/spdk/endian.h 00:16:27.822 TEST_HEADER include/spdk/env_dpdk.h 00:16:27.822 CC app/nvmf_tgt/nvmf_main.o 00:16:27.822 TEST_HEADER include/spdk/env.h 00:16:27.822 TEST_HEADER include/spdk/event.h 00:16:27.822 TEST_HEADER include/spdk/fd_group.h 00:16:27.822 TEST_HEADER include/spdk/fd.h 00:16:27.822 TEST_HEADER include/spdk/file.h 00:16:27.822 TEST_HEADER include/spdk/fsdev.h 00:16:27.822 TEST_HEADER 
include/spdk/fsdev_module.h 00:16:27.822 TEST_HEADER include/spdk/ftl.h 00:16:27.822 TEST_HEADER include/spdk/fuse_dispatcher.h 00:16:27.822 TEST_HEADER include/spdk/gpt_spec.h 00:16:27.822 CC test/thread/poller_perf/poller_perf.o 00:16:27.822 TEST_HEADER include/spdk/hexlify.h 00:16:27.822 TEST_HEADER include/spdk/histogram_data.h 00:16:27.822 TEST_HEADER include/spdk/idxd.h 00:16:27.822 TEST_HEADER include/spdk/idxd_spec.h 00:16:27.822 TEST_HEADER include/spdk/init.h 00:16:27.822 TEST_HEADER include/spdk/ioat.h 00:16:27.822 TEST_HEADER include/spdk/ioat_spec.h 00:16:27.822 TEST_HEADER include/spdk/iscsi_spec.h 00:16:27.822 TEST_HEADER include/spdk/json.h 00:16:27.822 TEST_HEADER include/spdk/jsonrpc.h 00:16:27.822 CC examples/util/zipf/zipf.o 00:16:27.822 TEST_HEADER include/spdk/keyring.h 00:16:27.822 TEST_HEADER include/spdk/keyring_module.h 00:16:27.822 TEST_HEADER include/spdk/likely.h 00:16:27.822 TEST_HEADER include/spdk/log.h 00:16:27.822 TEST_HEADER include/spdk/lvol.h 00:16:27.822 TEST_HEADER include/spdk/md5.h 00:16:27.822 TEST_HEADER include/spdk/memory.h 00:16:27.822 TEST_HEADER include/spdk/mmio.h 00:16:27.822 TEST_HEADER include/spdk/nbd.h 00:16:27.822 TEST_HEADER include/spdk/net.h 00:16:27.822 TEST_HEADER include/spdk/notify.h 00:16:27.822 TEST_HEADER include/spdk/nvme.h 00:16:27.822 TEST_HEADER include/spdk/nvme_intel.h 00:16:27.822 TEST_HEADER include/spdk/nvme_ocssd.h 00:16:27.822 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:16:27.822 TEST_HEADER include/spdk/nvme_spec.h 00:16:27.822 TEST_HEADER include/spdk/nvme_zns.h 00:16:27.822 TEST_HEADER include/spdk/nvmf_cmd.h 00:16:27.822 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:16:27.823 CC test/app/bdev_svc/bdev_svc.o 00:16:27.823 TEST_HEADER include/spdk/nvmf.h 00:16:27.823 TEST_HEADER include/spdk/nvmf_spec.h 00:16:27.823 TEST_HEADER include/spdk/nvmf_transport.h 00:16:27.823 TEST_HEADER include/spdk/opal.h 00:16:27.823 TEST_HEADER include/spdk/opal_spec.h 00:16:27.823 CC 
test/dma/test_dma/test_dma.o 00:16:27.823 TEST_HEADER include/spdk/pci_ids.h 00:16:27.823 TEST_HEADER include/spdk/pipe.h 00:16:27.823 TEST_HEADER include/spdk/queue.h 00:16:27.823 TEST_HEADER include/spdk/reduce.h 00:16:27.823 TEST_HEADER include/spdk/rpc.h 00:16:27.823 TEST_HEADER include/spdk/scheduler.h 00:16:27.823 TEST_HEADER include/spdk/scsi.h 00:16:27.823 TEST_HEADER include/spdk/scsi_spec.h 00:16:27.823 TEST_HEADER include/spdk/sock.h 00:16:27.823 TEST_HEADER include/spdk/stdinc.h 00:16:27.823 TEST_HEADER include/spdk/string.h 00:16:27.823 TEST_HEADER include/spdk/thread.h 00:16:27.823 TEST_HEADER include/spdk/trace.h 00:16:27.823 TEST_HEADER include/spdk/trace_parser.h 00:16:27.823 TEST_HEADER include/spdk/tree.h 00:16:27.823 LINK rpc_client_test 00:16:27.823 CC test/env/mem_callbacks/mem_callbacks.o 00:16:27.823 TEST_HEADER include/spdk/ublk.h 00:16:27.823 TEST_HEADER include/spdk/util.h 00:16:27.823 TEST_HEADER include/spdk/uuid.h 00:16:27.823 TEST_HEADER include/spdk/version.h 00:16:27.823 TEST_HEADER include/spdk/vfio_user_pci.h 00:16:27.823 TEST_HEADER include/spdk/vfio_user_spec.h 00:16:27.823 TEST_HEADER include/spdk/vhost.h 00:16:27.823 TEST_HEADER include/spdk/vmd.h 00:16:27.823 TEST_HEADER include/spdk/xor.h 00:16:27.823 LINK poller_perf 00:16:27.823 TEST_HEADER include/spdk/zipf.h 00:16:27.823 CXX test/cpp_headers/accel.o 00:16:28.079 LINK nvmf_tgt 00:16:28.079 LINK zipf 00:16:28.079 LINK spdk_trace_record 00:16:28.079 LINK bdev_svc 00:16:28.079 CXX test/cpp_headers/accel_module.o 00:16:28.079 LINK spdk_trace 00:16:28.079 CXX test/cpp_headers/assert.o 00:16:28.079 CC app/iscsi_tgt/iscsi_tgt.o 00:16:28.079 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:16:28.079 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:16:28.335 CC examples/ioat/perf/perf.o 00:16:28.335 CXX test/cpp_headers/barrier.o 00:16:28.335 CC examples/vmd/lsvmd/lsvmd.o 00:16:28.335 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:16:28.335 LINK iscsi_tgt 00:16:28.335 CC 
test/event/event_perf/event_perf.o 00:16:28.335 LINK test_dma 00:16:28.335 CXX test/cpp_headers/base64.o 00:16:28.335 LINK mem_callbacks 00:16:28.335 LINK ioat_perf 00:16:28.335 LINK lsvmd 00:16:28.593 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:16:28.593 LINK event_perf 00:16:28.593 CXX test/cpp_headers/bdev.o 00:16:28.593 CXX test/cpp_headers/bdev_module.o 00:16:28.593 CC test/env/vtophys/vtophys.o 00:16:28.593 LINK nvme_fuzz 00:16:28.593 CC app/spdk_tgt/spdk_tgt.o 00:16:28.593 CC examples/ioat/verify/verify.o 00:16:28.593 CC examples/vmd/led/led.o 00:16:28.593 CC test/event/reactor/reactor.o 00:16:28.593 LINK vtophys 00:16:28.593 CXX test/cpp_headers/bdev_zone.o 00:16:28.851 CC app/spdk_lspci/spdk_lspci.o 00:16:28.851 LINK led 00:16:28.851 LINK reactor 00:16:28.851 LINK verify 00:16:28.851 LINK spdk_tgt 00:16:28.851 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:16:28.851 CC test/accel/dif/dif.o 00:16:28.851 CXX test/cpp_headers/bit_array.o 00:16:28.851 LINK vhost_fuzz 00:16:28.851 LINK spdk_lspci 00:16:28.851 CC test/event/reactor_perf/reactor_perf.o 00:16:29.107 LINK env_dpdk_post_init 00:16:29.107 CC test/event/app_repeat/app_repeat.o 00:16:29.107 CXX test/cpp_headers/bit_pool.o 00:16:29.107 CC test/event/scheduler/scheduler.o 00:16:29.107 CC examples/idxd/perf/perf.o 00:16:29.107 CC app/spdk_nvme_perf/perf.o 00:16:29.107 LINK reactor_perf 00:16:29.107 LINK app_repeat 00:16:29.107 CXX test/cpp_headers/blob_bdev.o 00:16:29.107 CC test/blobfs/mkfs/mkfs.o 00:16:29.107 CC test/env/memory/memory_ut.o 00:16:29.364 LINK scheduler 00:16:29.364 CXX test/cpp_headers/blobfs_bdev.o 00:16:29.364 LINK mkfs 00:16:29.364 CC test/nvme/aer/aer.o 00:16:29.364 CXX test/cpp_headers/blobfs.o 00:16:29.364 LINK idxd_perf 00:16:29.364 CC test/lvol/esnap/esnap.o 00:16:29.364 CC test/nvme/reset/reset.o 00:16:29.622 CXX test/cpp_headers/blob.o 00:16:29.622 LINK dif 00:16:29.622 CC test/nvme/sgl/sgl.o 00:16:29.622 CC examples/interrupt_tgt/interrupt_tgt.o 00:16:29.622 CXX 
test/cpp_headers/conf.o 00:16:29.622 LINK aer 00:16:29.879 LINK reset 00:16:29.879 LINK interrupt_tgt 00:16:29.879 CC test/nvme/e2edp/nvme_dp.o 00:16:29.879 CXX test/cpp_headers/config.o 00:16:29.879 CXX test/cpp_headers/cpuset.o 00:16:29.879 LINK sgl 00:16:29.879 CXX test/cpp_headers/crc16.o 00:16:29.879 LINK iscsi_fuzz 00:16:29.879 LINK spdk_nvme_perf 00:16:30.137 CXX test/cpp_headers/crc32.o 00:16:30.137 CC test/bdev/bdevio/bdevio.o 00:16:30.137 LINK nvme_dp 00:16:30.137 CC test/nvme/overhead/overhead.o 00:16:30.137 CC examples/thread/thread/thread_ex.o 00:16:30.137 CC examples/sock/hello_world/hello_sock.o 00:16:30.137 CC app/spdk_nvme_identify/identify.o 00:16:30.137 CXX test/cpp_headers/crc64.o 00:16:30.137 CC test/app/histogram_perf/histogram_perf.o 00:16:30.137 CXX test/cpp_headers/dif.o 00:16:30.137 LINK memory_ut 00:16:30.394 CXX test/cpp_headers/dma.o 00:16:30.394 LINK thread 00:16:30.394 LINK histogram_perf 00:16:30.394 CXX test/cpp_headers/endian.o 00:16:30.394 LINK overhead 00:16:30.394 LINK hello_sock 00:16:30.394 LINK bdevio 00:16:30.394 CC test/env/pci/pci_ut.o 00:16:30.394 CXX test/cpp_headers/env_dpdk.o 00:16:30.394 CXX test/cpp_headers/env.o 00:16:30.394 CC test/app/jsoncat/jsoncat.o 00:16:30.394 CC test/app/stub/stub.o 00:16:30.653 CC test/nvme/err_injection/err_injection.o 00:16:30.653 CC examples/nvme/hello_world/hello_world.o 00:16:30.653 CXX test/cpp_headers/event.o 00:16:30.653 CC examples/nvme/reconnect/reconnect.o 00:16:30.653 CC examples/nvme/nvme_manage/nvme_manage.o 00:16:30.653 LINK jsoncat 00:16:30.653 LINK stub 00:16:30.653 LINK err_injection 00:16:30.653 CXX test/cpp_headers/fd_group.o 00:16:30.653 LINK spdk_nvme_identify 00:16:30.986 LINK hello_world 00:16:30.986 LINK pci_ut 00:16:30.986 CXX test/cpp_headers/fd.o 00:16:30.986 CC examples/accel/perf/accel_perf.o 00:16:30.986 CC test/nvme/startup/startup.o 00:16:30.986 CC app/spdk_nvme_discover/discovery_aer.o 00:16:30.986 CC examples/blob/hello_world/hello_blob.o 00:16:30.986 LINK 
reconnect 00:16:30.986 CC test/nvme/reserve/reserve.o 00:16:30.986 CXX test/cpp_headers/file.o 00:16:30.986 CXX test/cpp_headers/fsdev.o 00:16:30.986 LINK startup 00:16:30.986 CXX test/cpp_headers/fsdev_module.o 00:16:30.986 LINK nvme_manage 00:16:31.246 LINK spdk_nvme_discover 00:16:31.246 LINK hello_blob 00:16:31.246 LINK reserve 00:16:31.246 CC examples/nvme/arbitration/arbitration.o 00:16:31.246 CC test/nvme/simple_copy/simple_copy.o 00:16:31.246 CXX test/cpp_headers/ftl.o 00:16:31.246 CC examples/blob/cli/blobcli.o 00:16:31.246 CC test/nvme/connect_stress/connect_stress.o 00:16:31.246 CC app/spdk_top/spdk_top.o 00:16:31.246 CC test/nvme/boot_partition/boot_partition.o 00:16:31.504 LINK accel_perf 00:16:31.504 CXX test/cpp_headers/fuse_dispatcher.o 00:16:31.504 CC app/vhost/vhost.o 00:16:31.504 LINK simple_copy 00:16:31.504 LINK arbitration 00:16:31.504 LINK connect_stress 00:16:31.504 LINK boot_partition 00:16:31.504 CXX test/cpp_headers/gpt_spec.o 00:16:31.504 LINK vhost 00:16:31.763 CC examples/nvme/hotplug/hotplug.o 00:16:31.763 CXX test/cpp_headers/hexlify.o 00:16:31.763 CC examples/fsdev/hello_world/hello_fsdev.o 00:16:31.763 CC app/spdk_dd/spdk_dd.o 00:16:31.764 CC test/nvme/compliance/nvme_compliance.o 00:16:31.764 CC examples/bdev/hello_world/hello_bdev.o 00:16:31.764 LINK blobcli 00:16:31.764 CXX test/cpp_headers/histogram_data.o 00:16:31.764 CC examples/bdev/bdevperf/bdevperf.o 00:16:32.022 LINK hotplug 00:16:32.022 CXX test/cpp_headers/idxd.o 00:16:32.023 LINK hello_fsdev 00:16:32.023 LINK hello_bdev 00:16:32.023 LINK nvme_compliance 00:16:32.023 LINK spdk_dd 00:16:32.023 CC app/fio/nvme/fio_plugin.o 00:16:32.023 CXX test/cpp_headers/idxd_spec.o 00:16:32.281 CC examples/nvme/cmb_copy/cmb_copy.o 00:16:32.281 CC test/nvme/fused_ordering/fused_ordering.o 00:16:32.281 CC app/fio/bdev/fio_plugin.o 00:16:32.281 CC test/nvme/doorbell_aers/doorbell_aers.o 00:16:32.281 LINK spdk_top 00:16:32.281 CXX test/cpp_headers/init.o 00:16:32.281 CC test/nvme/fdp/fdp.o 
00:16:32.281 LINK cmb_copy 00:16:32.543 LINK fused_ordering 00:16:32.543 CXX test/cpp_headers/ioat.o 00:16:32.543 LINK doorbell_aers 00:16:32.543 CC examples/nvme/abort/abort.o 00:16:32.543 CXX test/cpp_headers/ioat_spec.o 00:16:32.543 LINK spdk_nvme 00:16:32.543 CXX test/cpp_headers/iscsi_spec.o 00:16:32.543 CC test/nvme/cuse/cuse.o 00:16:32.543 CXX test/cpp_headers/json.o 00:16:32.543 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:16:32.543 LINK fdp 00:16:32.801 CXX test/cpp_headers/jsonrpc.o 00:16:32.801 CXX test/cpp_headers/keyring.o 00:16:32.801 LINK bdevperf 00:16:32.801 LINK spdk_bdev 00:16:32.801 LINK pmr_persistence 00:16:32.801 CXX test/cpp_headers/keyring_module.o 00:16:32.801 CXX test/cpp_headers/likely.o 00:16:32.801 CXX test/cpp_headers/log.o 00:16:32.801 CXX test/cpp_headers/lvol.o 00:16:32.801 CXX test/cpp_headers/md5.o 00:16:32.801 CXX test/cpp_headers/memory.o 00:16:32.801 CXX test/cpp_headers/mmio.o 00:16:32.801 LINK abort 00:16:33.058 CXX test/cpp_headers/nbd.o 00:16:33.058 CXX test/cpp_headers/net.o 00:16:33.058 CXX test/cpp_headers/notify.o 00:16:33.058 CXX test/cpp_headers/nvme.o 00:16:33.058 CXX test/cpp_headers/nvme_intel.o 00:16:33.058 CXX test/cpp_headers/nvme_ocssd.o 00:16:33.058 CXX test/cpp_headers/nvme_ocssd_spec.o 00:16:33.058 CXX test/cpp_headers/nvme_spec.o 00:16:33.058 CXX test/cpp_headers/nvme_zns.o 00:16:33.058 CXX test/cpp_headers/nvmf_cmd.o 00:16:33.058 CXX test/cpp_headers/nvmf_fc_spec.o 00:16:33.058 CXX test/cpp_headers/nvmf.o 00:16:33.058 CXX test/cpp_headers/nvmf_spec.o 00:16:33.058 CXX test/cpp_headers/nvmf_transport.o 00:16:33.317 CC examples/nvmf/nvmf/nvmf.o 00:16:33.317 CXX test/cpp_headers/opal.o 00:16:33.317 CXX test/cpp_headers/opal_spec.o 00:16:33.317 CXX test/cpp_headers/pci_ids.o 00:16:33.317 CXX test/cpp_headers/pipe.o 00:16:33.317 CXX test/cpp_headers/queue.o 00:16:33.317 CXX test/cpp_headers/reduce.o 00:16:33.317 CXX test/cpp_headers/rpc.o 00:16:33.317 CXX test/cpp_headers/scheduler.o 00:16:33.317 CXX 
test/cpp_headers/scsi.o 00:16:33.317 CXX test/cpp_headers/scsi_spec.o 00:16:33.317 CXX test/cpp_headers/sock.o 00:16:33.317 CXX test/cpp_headers/stdinc.o 00:16:33.575 CXX test/cpp_headers/string.o 00:16:33.575 CXX test/cpp_headers/thread.o 00:16:33.575 CXX test/cpp_headers/trace.o 00:16:33.575 CXX test/cpp_headers/trace_parser.o 00:16:33.575 CXX test/cpp_headers/tree.o 00:16:33.575 LINK nvmf 00:16:33.575 CXX test/cpp_headers/ublk.o 00:16:33.575 CXX test/cpp_headers/util.o 00:16:33.575 CXX test/cpp_headers/uuid.o 00:16:33.575 CXX test/cpp_headers/version.o 00:16:33.575 CXX test/cpp_headers/vfio_user_pci.o 00:16:33.575 CXX test/cpp_headers/vfio_user_spec.o 00:16:33.575 CXX test/cpp_headers/vhost.o 00:16:33.575 CXX test/cpp_headers/vmd.o 00:16:33.575 CXX test/cpp_headers/xor.o 00:16:33.833 CXX test/cpp_headers/zipf.o 00:16:33.833 LINK cuse 00:16:35.219 LINK esnap 00:16:35.219 00:16:35.219 real 1m8.633s 00:16:35.219 user 6m26.234s 00:16:35.219 sys 1m5.544s 00:16:35.219 12:48:17 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:16:35.219 12:48:17 make -- common/autotest_common.sh@10 -- $ set +x 00:16:35.219 ************************************ 00:16:35.219 END TEST make 00:16:35.219 ************************************ 00:16:35.219 12:48:17 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:16:35.219 12:48:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:16:35.219 12:48:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:16:35.219 12:48:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:16:35.219 12:48:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:16:35.219 12:48:17 -- pm/common@44 -- $ pid=5021 00:16:35.219 12:48:17 -- pm/common@50 -- $ kill -TERM 5021 00:16:35.219 12:48:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:16:35.219 12:48:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:16:35.219 
12:48:17 -- pm/common@44 -- $ pid=5022 00:16:35.219 12:48:17 -- pm/common@50 -- $ kill -TERM 5022 00:16:35.219 12:48:17 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:16:35.219 12:48:17 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:16:35.478 12:48:17 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:35.478 12:48:17 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:35.478 12:48:17 -- common/autotest_common.sh@1711 -- # lcov --version 00:16:35.478 12:48:17 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:35.478 12:48:17 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:35.478 12:48:17 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:35.478 12:48:17 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:35.478 12:48:17 -- scripts/common.sh@336 -- # IFS=.-: 00:16:35.478 12:48:17 -- scripts/common.sh@336 -- # read -ra ver1 00:16:35.478 12:48:17 -- scripts/common.sh@337 -- # IFS=.-: 00:16:35.478 12:48:17 -- scripts/common.sh@337 -- # read -ra ver2 00:16:35.478 12:48:17 -- scripts/common.sh@338 -- # local 'op=<' 00:16:35.478 12:48:17 -- scripts/common.sh@340 -- # ver1_l=2 00:16:35.478 12:48:17 -- scripts/common.sh@341 -- # ver2_l=1 00:16:35.478 12:48:17 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:35.478 12:48:17 -- scripts/common.sh@344 -- # case "$op" in 00:16:35.478 12:48:17 -- scripts/common.sh@345 -- # : 1 00:16:35.478 12:48:17 -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:35.478 12:48:17 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:35.478 12:48:17 -- scripts/common.sh@365 -- # decimal 1 00:16:35.478 12:48:17 -- scripts/common.sh@353 -- # local d=1 00:16:35.478 12:48:17 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:35.478 12:48:17 -- scripts/common.sh@355 -- # echo 1 00:16:35.478 12:48:17 -- scripts/common.sh@365 -- # ver1[v]=1 00:16:35.478 12:48:17 -- scripts/common.sh@366 -- # decimal 2 00:16:35.478 12:48:17 -- scripts/common.sh@353 -- # local d=2 00:16:35.478 12:48:17 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:35.478 12:48:17 -- scripts/common.sh@355 -- # echo 2 00:16:35.478 12:48:17 -- scripts/common.sh@366 -- # ver2[v]=2 00:16:35.478 12:48:17 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:35.478 12:48:17 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:35.478 12:48:17 -- scripts/common.sh@368 -- # return 0 00:16:35.478 12:48:17 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:35.478 12:48:17 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:35.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.478 --rc genhtml_branch_coverage=1 00:16:35.478 --rc genhtml_function_coverage=1 00:16:35.478 --rc genhtml_legend=1 00:16:35.478 --rc geninfo_all_blocks=1 00:16:35.478 --rc geninfo_unexecuted_blocks=1 00:16:35.478 00:16:35.478 ' 00:16:35.478 12:48:17 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:35.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.478 --rc genhtml_branch_coverage=1 00:16:35.478 --rc genhtml_function_coverage=1 00:16:35.478 --rc genhtml_legend=1 00:16:35.478 --rc geninfo_all_blocks=1 00:16:35.478 --rc geninfo_unexecuted_blocks=1 00:16:35.478 00:16:35.478 ' 00:16:35.478 12:48:17 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:35.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.478 --rc genhtml_branch_coverage=1 00:16:35.478 --rc 
genhtml_function_coverage=1 00:16:35.478 --rc genhtml_legend=1 00:16:35.478 --rc geninfo_all_blocks=1 00:16:35.478 --rc geninfo_unexecuted_blocks=1 00:16:35.478 00:16:35.478 ' 00:16:35.478 12:48:17 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:35.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.478 --rc genhtml_branch_coverage=1 00:16:35.478 --rc genhtml_function_coverage=1 00:16:35.478 --rc genhtml_legend=1 00:16:35.478 --rc geninfo_all_blocks=1 00:16:35.478 --rc geninfo_unexecuted_blocks=1 00:16:35.478 00:16:35.478 ' 00:16:35.478 12:48:17 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:35.478 12:48:17 -- nvmf/common.sh@7 -- # uname -s 00:16:35.478 12:48:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.478 12:48:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.478 12:48:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.478 12:48:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.478 12:48:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.478 12:48:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.478 12:48:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.478 12:48:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.478 12:48:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.478 12:48:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.478 12:48:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ea58e83f-bd42-45fc-a617-d0e3b2b9b56b 00:16:35.478 12:48:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=ea58e83f-bd42-45fc-a617-d0e3b2b9b56b 00:16:35.478 12:48:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.478 12:48:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.478 12:48:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:35.478 12:48:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:16:35.478 12:48:17 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:35.478 12:48:17 -- scripts/common.sh@15 -- # shopt -s extglob 00:16:35.478 12:48:17 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.478 12:48:17 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.478 12:48:17 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.478 12:48:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.478 12:48:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.478 12:48:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.478 12:48:17 -- paths/export.sh@5 -- # export PATH 00:16:35.478 12:48:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.478 12:48:17 -- nvmf/common.sh@51 -- # : 0 00:16:35.478 12:48:17 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:35.478 12:48:17 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:35.478 12:48:17 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:16:35.478 12:48:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.478 12:48:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.478 12:48:17 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:35.478 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:35.478 12:48:17 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:35.478 12:48:17 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:35.478 12:48:17 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:35.478 12:48:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:16:35.478 12:48:17 -- spdk/autotest.sh@32 -- # uname -s 00:16:35.478 12:48:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:16:35.478 12:48:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:16:35.478 12:48:17 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:16:35.478 12:48:17 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:16:35.478 12:48:17 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:16:35.478 12:48:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:16:35.478 12:48:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:16:35.478 12:48:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:16:35.478 12:48:17 -- spdk/autotest.sh@48 -- # udevadm_pid=53695 00:16:35.478 12:48:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:16:35.478 12:48:17 -- pm/common@17 -- # local monitor 00:16:35.479 12:48:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:16:35.479 12:48:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:16:35.479 12:48:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:16:35.479 12:48:17 -- pm/common@25 -- # sleep 1 00:16:35.479 12:48:17 -- pm/common@21 -- # date +%s 00:16:35.479 12:48:17 -- 
pm/common@21 -- # date +%s 00:16:35.479 12:48:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733402897 00:16:35.479 12:48:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733402897 00:16:35.479 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733402897_collect-cpu-load.pm.log 00:16:35.479 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733402897_collect-vmstat.pm.log 00:16:36.410 12:48:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:16:36.410 12:48:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:16:36.410 12:48:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:36.410 12:48:18 -- common/autotest_common.sh@10 -- # set +x 00:16:36.410 12:48:18 -- spdk/autotest.sh@59 -- # create_test_list 00:16:36.410 12:48:18 -- common/autotest_common.sh@752 -- # xtrace_disable 00:16:36.410 12:48:18 -- common/autotest_common.sh@10 -- # set +x 00:16:36.668 12:48:19 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:16:36.668 12:48:19 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:16:36.668 12:48:19 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:16:36.668 12:48:19 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:16:36.668 12:48:19 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:16:36.668 12:48:19 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:16:36.668 12:48:19 -- common/autotest_common.sh@1457 -- # uname 00:16:36.668 12:48:19 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:16:36.668 12:48:19 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:16:36.668 12:48:19 -- common/autotest_common.sh@1477 -- 
# uname 00:16:36.668 12:48:19 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:16:36.668 12:48:19 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:16:36.668 12:48:19 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:16:36.668 lcov: LCOV version 1.15 00:16:36.668 12:48:19 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:16:51.554 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:16:51.554 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:17:06.426 12:48:47 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:17:06.426 12:48:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.426 12:48:47 -- common/autotest_common.sh@10 -- # set +x 00:17:06.426 12:48:47 -- spdk/autotest.sh@78 -- # rm -f 00:17:06.426 12:48:47 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:06.426 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:06.426 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:17:06.426 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:17:06.426 12:48:47 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:17:06.426 12:48:47 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:17:06.426 12:48:47 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:17:06.426 12:48:47 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:17:06.426 
12:48:47 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:17:06.426 12:48:47 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:17:06.426 12:48:47 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:17:06.426 12:48:47 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:17:06.426 12:48:47 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:17:06.426 12:48:47 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:17:06.426 12:48:47 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:17:06.426 12:48:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:06.426 12:48:47 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:06.426 12:48:47 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:17:06.426 12:48:47 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:17:06.426 12:48:47 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:17:06.426 12:48:47 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:17:06.426 12:48:47 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:17:06.426 12:48:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:06.426 12:48:47 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:06.426 12:48:47 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:17:06.426 12:48:47 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:17:06.426 12:48:47 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:17:06.426 12:48:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:17:06.426 12:48:47 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:06.426 12:48:47 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:17:06.426 12:48:47 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:17:06.426 12:48:47 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n3 00:17:06.426 12:48:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:17:06.426 12:48:47 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:06.426 12:48:47 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:17:06.426 12:48:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:17:06.426 12:48:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:17:06.426 12:48:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:17:06.426 12:48:47 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:17:06.426 12:48:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:17:06.426 No valid GPT data, bailing 00:17:06.426 12:48:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:06.426 12:48:47 -- scripts/common.sh@394 -- # pt= 00:17:06.426 12:48:47 -- scripts/common.sh@395 -- # return 1 00:17:06.426 12:48:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:17:06.426 1+0 records in 00:17:06.426 1+0 records out 00:17:06.426 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00468617 s, 224 MB/s 00:17:06.426 12:48:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:17:06.426 12:48:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:17:06.426 12:48:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:17:06.426 12:48:47 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:17:06.426 12:48:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:17:06.426 No valid GPT data, bailing 00:17:06.426 12:48:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:06.426 12:48:47 -- scripts/common.sh@394 -- # pt= 00:17:06.426 12:48:47 -- scripts/common.sh@395 -- # return 1 00:17:06.426 12:48:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:17:06.426 1+0 records in 00:17:06.426 1+0 records 
out 00:17:06.426 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00540637 s, 194 MB/s 00:17:06.426 12:48:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:17:06.426 12:48:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:17:06.426 12:48:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:17:06.426 12:48:47 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:17:06.426 12:48:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:17:06.426 No valid GPT data, bailing 00:17:06.426 12:48:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:17:06.426 12:48:47 -- scripts/common.sh@394 -- # pt= 00:17:06.426 12:48:47 -- scripts/common.sh@395 -- # return 1 00:17:06.426 12:48:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:17:06.426 1+0 records in 00:17:06.426 1+0 records out 00:17:06.426 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00307253 s, 341 MB/s 00:17:06.426 12:48:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:17:06.426 12:48:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:17:06.426 12:48:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:17:06.427 12:48:47 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:17:06.427 12:48:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:17:06.427 No valid GPT data, bailing 00:17:06.427 12:48:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:17:06.427 12:48:47 -- scripts/common.sh@394 -- # pt= 00:17:06.427 12:48:47 -- scripts/common.sh@395 -- # return 1 00:17:06.427 12:48:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:17:06.427 1+0 records in 00:17:06.427 1+0 records out 00:17:06.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00264175 s, 397 MB/s 00:17:06.427 12:48:47 -- spdk/autotest.sh@105 -- # sync 00:17:06.427 12:48:48 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd 
reap_spdk_processes 00:17:06.427 12:48:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:17:06.427 12:48:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:17:06.992 12:48:49 -- spdk/autotest.sh@111 -- # uname -s 00:17:06.992 12:48:49 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:17:06.992 12:48:49 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:17:06.992 12:48:49 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:17:07.557 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:07.557 Hugepages 00:17:07.557 node hugesize free / total 00:17:07.557 node0 1048576kB 0 / 0 00:17:07.557 node0 2048kB 0 / 0 00:17:07.557 00:17:07.557 Type BDF Vendor Device NUMA Driver Device Block devices 00:17:07.557 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:17:07.557 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:17:07.815 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:17:07.815 12:48:50 -- spdk/autotest.sh@117 -- # uname -s 00:17:07.815 12:48:50 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:17:07.815 12:48:50 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:17:07.815 12:48:50 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:08.381 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:08.381 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:08.381 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:08.381 12:48:50 -- common/autotest_common.sh@1517 -- # sleep 1 00:17:09.315 12:48:51 -- common/autotest_common.sh@1518 -- # bdfs=() 00:17:09.315 12:48:51 -- common/autotest_common.sh@1518 -- # local bdfs 00:17:09.315 12:48:51 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:17:09.315 12:48:51 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 
00:17:09.315 12:48:51 -- common/autotest_common.sh@1498 -- # bdfs=() 00:17:09.315 12:48:51 -- common/autotest_common.sh@1498 -- # local bdfs 00:17:09.315 12:48:51 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:09.315 12:48:51 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:09.315 12:48:51 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:17:09.572 12:48:51 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:17:09.572 12:48:51 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:17:09.572 12:48:51 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:09.830 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:09.830 Waiting for block devices as requested 00:17:09.830 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:09.830 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:09.830 12:48:52 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:17:09.830 12:48:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:17:09.830 12:48:52 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:17:09.830 12:48:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:17:09.830 12:48:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:17:09.830 12:48:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:17:09.830 12:48:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:17:09.830 12:48:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:17:09.830 12:48:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:17:09.830 
12:48:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:17:09.830 12:48:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:17:09.830 12:48:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:17:09.830 12:48:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:17:09.830 12:48:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:17:09.830 12:48:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:17:09.830 12:48:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:17:09.830 12:48:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:17:09.830 12:48:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:17:09.830 12:48:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:17:09.830 12:48:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:17:09.830 12:48:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:17:09.830 12:48:52 -- common/autotest_common.sh@1543 -- # continue 00:17:09.830 12:48:52 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:17:09.830 12:48:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:17:09.830 12:48:52 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:17:09.830 12:48:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:17:09.830 12:48:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:17:09.830 12:48:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:17:09.830 12:48:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:17:09.830 12:48:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:17:09.830 12:48:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:17:09.830 12:48:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:17:09.830 12:48:52 -- 
common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:17:09.830 12:48:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:17:09.830 12:48:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:17:09.830 12:48:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:17:09.830 12:48:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:17:09.830 12:48:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:17:09.830 12:48:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:17:09.830 12:48:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:17:09.830 12:48:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:17:09.830 12:48:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:17:09.831 12:48:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:17:09.831 12:48:52 -- common/autotest_common.sh@1543 -- # continue 00:17:09.831 12:48:52 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:17:09.831 12:48:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:09.831 12:48:52 -- common/autotest_common.sh@10 -- # set +x 00:17:10.088 12:48:52 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:17:10.088 12:48:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:10.088 12:48:52 -- common/autotest_common.sh@10 -- # set +x 00:17:10.088 12:48:52 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:10.346 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:10.603 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:10.603 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:10.603 12:48:53 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:17:10.603 12:48:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:10.603 12:48:53 -- common/autotest_common.sh@10 -- # set +x 00:17:10.603 12:48:53 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:17:10.603 12:48:53 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:17:10.603 12:48:53 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:17:10.604 12:48:53 -- common/autotest_common.sh@1563 -- # bdfs=() 00:17:10.604 12:48:53 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:17:10.604 12:48:53 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:17:10.604 12:48:53 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:17:10.604 12:48:53 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:17:10.604 12:48:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:17:10.604 12:48:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:17:10.604 12:48:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:10.604 12:48:53 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:10.604 12:48:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:17:10.604 12:48:53 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:17:10.604 12:48:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:17:10.604 12:48:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:17:10.604 12:48:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:17:10.604 12:48:53 -- common/autotest_common.sh@1566 -- # device=0x0010 00:17:10.604 12:48:53 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:17:10.604 12:48:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:17:10.604 12:48:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:17:10.604 12:48:53 -- common/autotest_common.sh@1566 -- # device=0x0010 00:17:10.604 12:48:53 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:17:10.604 12:48:53 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:17:10.604 12:48:53 -- 
common/autotest_common.sh@1572 -- # return 0 00:17:10.604 12:48:53 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:17:10.604 12:48:53 -- common/autotest_common.sh@1580 -- # return 0 00:17:10.604 12:48:53 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:17:10.604 12:48:53 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:17:10.604 12:48:53 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:17:10.604 12:48:53 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:17:10.604 12:48:53 -- spdk/autotest.sh@149 -- # timing_enter lib 00:17:10.604 12:48:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:10.604 12:48:53 -- common/autotest_common.sh@10 -- # set +x 00:17:10.862 12:48:53 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:17:10.862 12:48:53 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:17:10.862 12:48:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:10.862 12:48:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.862 12:48:53 -- common/autotest_common.sh@10 -- # set +x 00:17:10.862 ************************************ 00:17:10.862 START TEST env 00:17:10.862 ************************************ 00:17:10.862 12:48:53 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:17:10.862 * Looking for test storage... 
00:17:10.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:17:10.862 12:48:53 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:10.862 12:48:53 env -- common/autotest_common.sh@1711 -- # lcov --version 00:17:10.862 12:48:53 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:10.862 12:48:53 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:10.862 12:48:53 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.862 12:48:53 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.862 12:48:53 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.862 12:48:53 env -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.862 12:48:53 env -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.862 12:48:53 env -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.862 12:48:53 env -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.862 12:48:53 env -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.862 12:48:53 env -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.862 12:48:53 env -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.862 12:48:53 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.862 12:48:53 env -- scripts/common.sh@344 -- # case "$op" in 00:17:10.862 12:48:53 env -- scripts/common.sh@345 -- # : 1 00:17:10.862 12:48:53 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.862 12:48:53 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:10.862 12:48:53 env -- scripts/common.sh@365 -- # decimal 1 00:17:10.862 12:48:53 env -- scripts/common.sh@353 -- # local d=1 00:17:10.862 12:48:53 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.862 12:48:53 env -- scripts/common.sh@355 -- # echo 1 00:17:10.862 12:48:53 env -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.862 12:48:53 env -- scripts/common.sh@366 -- # decimal 2 00:17:10.862 12:48:53 env -- scripts/common.sh@353 -- # local d=2 00:17:10.862 12:48:53 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.862 12:48:53 env -- scripts/common.sh@355 -- # echo 2 00:17:10.862 12:48:53 env -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.862 12:48:53 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.862 12:48:53 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.862 12:48:53 env -- scripts/common.sh@368 -- # return 0 00:17:10.862 12:48:53 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.862 12:48:53 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:10.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.862 --rc genhtml_branch_coverage=1 00:17:10.862 --rc genhtml_function_coverage=1 00:17:10.862 --rc genhtml_legend=1 00:17:10.862 --rc geninfo_all_blocks=1 00:17:10.862 --rc geninfo_unexecuted_blocks=1 00:17:10.862 00:17:10.862 ' 00:17:10.862 12:48:53 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:10.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.862 --rc genhtml_branch_coverage=1 00:17:10.862 --rc genhtml_function_coverage=1 00:17:10.862 --rc genhtml_legend=1 00:17:10.862 --rc geninfo_all_blocks=1 00:17:10.862 --rc geninfo_unexecuted_blocks=1 00:17:10.862 00:17:10.862 ' 00:17:10.862 12:48:53 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:10.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:17:10.862 --rc genhtml_branch_coverage=1 00:17:10.862 --rc genhtml_function_coverage=1 00:17:10.862 --rc genhtml_legend=1 00:17:10.862 --rc geninfo_all_blocks=1 00:17:10.862 --rc geninfo_unexecuted_blocks=1 00:17:10.862 00:17:10.862 ' 00:17:10.862 12:48:53 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:10.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.862 --rc genhtml_branch_coverage=1 00:17:10.862 --rc genhtml_function_coverage=1 00:17:10.862 --rc genhtml_legend=1 00:17:10.862 --rc geninfo_all_blocks=1 00:17:10.862 --rc geninfo_unexecuted_blocks=1 00:17:10.862 00:17:10.862 ' 00:17:10.862 12:48:53 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:17:10.862 12:48:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:10.862 12:48:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.862 12:48:53 env -- common/autotest_common.sh@10 -- # set +x 00:17:10.862 ************************************ 00:17:10.862 START TEST env_memory 00:17:10.862 ************************************ 00:17:10.862 12:48:53 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:17:10.862 00:17:10.862 00:17:10.862 CUnit - A unit testing framework for C - Version 2.1-3 00:17:10.862 http://cunit.sourceforge.net/ 00:17:10.862 00:17:10.862 00:17:10.862 Suite: memory 00:17:10.862 Test: alloc and free memory map ...[2024-12-05 12:48:53.389301] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:17:10.862 passed 00:17:10.862 Test: mem map translation ...[2024-12-05 12:48:53.428127] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:17:10.862 [2024-12-05 12:48:53.428185] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:17:10.862 [2024-12-05 12:48:53.428245] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:17:10.862 [2024-12-05 12:48:53.428260] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:17:11.120 passed 00:17:11.120 Test: mem map registration ...[2024-12-05 12:48:53.497173] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:17:11.120 [2024-12-05 12:48:53.497246] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:17:11.120 passed 00:17:11.120 Test: mem map adjacent registrations ...passed 00:17:11.120 00:17:11.120 Run Summary: Type Total Ran Passed Failed Inactive 00:17:11.120 suites 1 1 n/a 0 0 00:17:11.120 tests 4 4 4 0 0 00:17:11.120 asserts 152 152 152 0 n/a 00:17:11.120 00:17:11.120 Elapsed time = 0.242 seconds 00:17:11.120 00:17:11.120 real 0m0.272s 00:17:11.120 user 0m0.248s 00:17:11.120 sys 0m0.019s 00:17:11.120 12:48:53 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:11.120 12:48:53 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:17:11.120 ************************************ 00:17:11.120 END TEST env_memory 00:17:11.120 ************************************ 00:17:11.120 12:48:53 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:17:11.120 12:48:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:11.120 12:48:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.120 12:48:53 env -- common/autotest_common.sh@10 -- # set +x 00:17:11.120 
************************************ 00:17:11.120 START TEST env_vtophys 00:17:11.120 ************************************ 00:17:11.120 12:48:53 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:17:11.120 EAL: lib.eal log level changed from notice to debug 00:17:11.120 EAL: Detected lcore 0 as core 0 on socket 0 00:17:11.120 EAL: Detected lcore 1 as core 0 on socket 0 00:17:11.120 EAL: Detected lcore 2 as core 0 on socket 0 00:17:11.120 EAL: Detected lcore 3 as core 0 on socket 0 00:17:11.120 EAL: Detected lcore 4 as core 0 on socket 0 00:17:11.120 EAL: Detected lcore 5 as core 0 on socket 0 00:17:11.120 EAL: Detected lcore 6 as core 0 on socket 0 00:17:11.120 EAL: Detected lcore 7 as core 0 on socket 0 00:17:11.120 EAL: Detected lcore 8 as core 0 on socket 0 00:17:11.120 EAL: Detected lcore 9 as core 0 on socket 0 00:17:11.120 EAL: Maximum logical cores by configuration: 128 00:17:11.120 EAL: Detected CPU lcores: 10 00:17:11.120 EAL: Detected NUMA nodes: 1 00:17:11.120 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:17:11.120 EAL: Detected shared linkage of DPDK 00:17:11.120 EAL: No shared files mode enabled, IPC will be disabled 00:17:11.377 EAL: Selected IOVA mode 'PA' 00:17:11.377 EAL: Probing VFIO support... 00:17:11.377 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:17:11.377 EAL: VFIO modules not loaded, skipping VFIO support... 00:17:11.377 EAL: Ask a virtual area of 0x2e000 bytes 00:17:11.377 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:17:11.377 EAL: Setting up physically contiguous memory... 
00:17:11.377 EAL: Setting maximum number of open files to 524288 00:17:11.377 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:17:11.377 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:17:11.377 EAL: Ask a virtual area of 0x61000 bytes 00:17:11.377 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:17:11.377 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:11.377 EAL: Ask a virtual area of 0x400000000 bytes 00:17:11.377 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:17:11.377 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:17:11.377 EAL: Ask a virtual area of 0x61000 bytes 00:17:11.377 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:17:11.377 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:11.377 EAL: Ask a virtual area of 0x400000000 bytes 00:17:11.377 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:17:11.377 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:17:11.377 EAL: Ask a virtual area of 0x61000 bytes 00:17:11.377 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:17:11.377 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:11.377 EAL: Ask a virtual area of 0x400000000 bytes 00:17:11.377 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:17:11.377 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:17:11.377 EAL: Ask a virtual area of 0x61000 bytes 00:17:11.377 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:17:11.377 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:11.377 EAL: Ask a virtual area of 0x400000000 bytes 00:17:11.377 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:17:11.377 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:17:11.377 EAL: Hugepages will be freed exactly as allocated. 
00:17:11.377 EAL: No shared files mode enabled, IPC is disabled 00:17:11.377 EAL: No shared files mode enabled, IPC is disabled 00:17:11.377 EAL: TSC frequency is ~2600000 KHz 00:17:11.377 EAL: Main lcore 0 is ready (tid=7f59e26e6a40;cpuset=[0]) 00:17:11.377 EAL: Trying to obtain current memory policy. 00:17:11.377 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:11.377 EAL: Restoring previous memory policy: 0 00:17:11.377 EAL: request: mp_malloc_sync 00:17:11.377 EAL: No shared files mode enabled, IPC is disabled 00:17:11.377 EAL: Heap on socket 0 was expanded by 2MB 00:17:11.377 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:17:11.377 EAL: No PCI address specified using 'addr=' in: bus=pci 00:17:11.377 EAL: Mem event callback 'spdk:(nil)' registered 00:17:11.377 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:17:11.377 00:17:11.377 00:17:11.377 CUnit - A unit testing framework for C - Version 2.1-3 00:17:11.377 http://cunit.sourceforge.net/ 00:17:11.377 00:17:11.377 00:17:11.377 Suite: components_suite 00:17:11.634 Test: vtophys_malloc_test ...passed 00:17:11.635 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:17:11.635 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:11.635 EAL: Restoring previous memory policy: 4 00:17:11.635 EAL: Calling mem event callback 'spdk:(nil)' 00:17:11.635 EAL: request: mp_malloc_sync 00:17:11.635 EAL: No shared files mode enabled, IPC is disabled 00:17:11.635 EAL: Heap on socket 0 was expanded by 4MB 00:17:11.635 EAL: Calling mem event callback 'spdk:(nil)' 00:17:11.635 EAL: request: mp_malloc_sync 00:17:11.635 EAL: No shared files mode enabled, IPC is disabled 00:17:11.635 EAL: Heap on socket 0 was shrunk by 4MB 00:17:11.635 EAL: Trying to obtain current memory policy. 
00:17:11.635 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:11.635 EAL: Restoring previous memory policy: 4 00:17:11.635 EAL: Calling mem event callback 'spdk:(nil)' 00:17:11.635 EAL: request: mp_malloc_sync 00:17:11.635 EAL: No shared files mode enabled, IPC is disabled 00:17:11.635 EAL: Heap on socket 0 was expanded by 6MB 00:17:11.635 EAL: Calling mem event callback 'spdk:(nil)' 00:17:11.635 EAL: request: mp_malloc_sync 00:17:11.635 EAL: No shared files mode enabled, IPC is disabled 00:17:11.635 EAL: Heap on socket 0 was shrunk by 6MB 00:17:11.635 EAL: Trying to obtain current memory policy. 00:17:11.635 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:11.635 EAL: Restoring previous memory policy: 4 00:17:11.635 EAL: Calling mem event callback 'spdk:(nil)' 00:17:11.635 EAL: request: mp_malloc_sync 00:17:11.635 EAL: No shared files mode enabled, IPC is disabled 00:17:11.635 EAL: Heap on socket 0 was expanded by 10MB 00:17:11.635 EAL: Calling mem event callback 'spdk:(nil)' 00:17:11.635 EAL: request: mp_malloc_sync 00:17:11.635 EAL: No shared files mode enabled, IPC is disabled 00:17:11.635 EAL: Heap on socket 0 was shrunk by 10MB 00:17:11.635 EAL: Trying to obtain current memory policy. 00:17:11.635 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:11.635 EAL: Restoring previous memory policy: 4 00:17:11.635 EAL: Calling mem event callback 'spdk:(nil)' 00:17:11.635 EAL: request: mp_malloc_sync 00:17:11.635 EAL: No shared files mode enabled, IPC is disabled 00:17:11.635 EAL: Heap on socket 0 was expanded by 18MB 00:17:11.635 EAL: Calling mem event callback 'spdk:(nil)' 00:17:11.635 EAL: request: mp_malloc_sync 00:17:11.635 EAL: No shared files mode enabled, IPC is disabled 00:17:11.635 EAL: Heap on socket 0 was shrunk by 18MB 00:17:11.892 EAL: Trying to obtain current memory policy. 
00:17:11.892 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:11.892 EAL: Restoring previous memory policy: 4 00:17:11.892 EAL: Calling mem event callback 'spdk:(nil)' 00:17:11.892 EAL: request: mp_malloc_sync 00:17:11.892 EAL: No shared files mode enabled, IPC is disabled 00:17:11.892 EAL: Heap on socket 0 was expanded by 34MB 00:17:11.892 EAL: Calling mem event callback 'spdk:(nil)' 00:17:11.892 EAL: request: mp_malloc_sync 00:17:11.892 EAL: No shared files mode enabled, IPC is disabled 00:17:11.892 EAL: Heap on socket 0 was shrunk by 34MB 00:17:11.892 EAL: Trying to obtain current memory policy. 00:17:11.892 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:11.892 EAL: Restoring previous memory policy: 4 00:17:11.892 EAL: Calling mem event callback 'spdk:(nil)' 00:17:11.892 EAL: request: mp_malloc_sync 00:17:11.892 EAL: No shared files mode enabled, IPC is disabled 00:17:11.892 EAL: Heap on socket 0 was expanded by 66MB 00:17:11.892 EAL: Calling mem event callback 'spdk:(nil)' 00:17:11.892 EAL: request: mp_malloc_sync 00:17:11.892 EAL: No shared files mode enabled, IPC is disabled 00:17:11.892 EAL: Heap on socket 0 was shrunk by 66MB 00:17:11.892 EAL: Trying to obtain current memory policy. 00:17:11.892 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:12.148 EAL: Restoring previous memory policy: 4 00:17:12.148 EAL: Calling mem event callback 'spdk:(nil)' 00:17:12.148 EAL: request: mp_malloc_sync 00:17:12.149 EAL: No shared files mode enabled, IPC is disabled 00:17:12.149 EAL: Heap on socket 0 was expanded by 130MB 00:17:12.149 EAL: Calling mem event callback 'spdk:(nil)' 00:17:12.405 EAL: request: mp_malloc_sync 00:17:12.405 EAL: No shared files mode enabled, IPC is disabled 00:17:12.405 EAL: Heap on socket 0 was shrunk by 130MB 00:17:12.405 EAL: Trying to obtain current memory policy. 
00:17:12.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:12.405 EAL: Restoring previous memory policy: 4 00:17:12.405 EAL: Calling mem event callback 'spdk:(nil)' 00:17:12.405 EAL: request: mp_malloc_sync 00:17:12.405 EAL: No shared files mode enabled, IPC is disabled 00:17:12.405 EAL: Heap on socket 0 was expanded by 258MB 00:17:12.662 EAL: Calling mem event callback 'spdk:(nil)' 00:17:12.662 EAL: request: mp_malloc_sync 00:17:12.662 EAL: No shared files mode enabled, IPC is disabled 00:17:12.662 EAL: Heap on socket 0 was shrunk by 258MB 00:17:12.919 EAL: Trying to obtain current memory policy. 00:17:12.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:13.176 EAL: Restoring previous memory policy: 4 00:17:13.176 EAL: Calling mem event callback 'spdk:(nil)' 00:17:13.176 EAL: request: mp_malloc_sync 00:17:13.176 EAL: No shared files mode enabled, IPC is disabled 00:17:13.176 EAL: Heap on socket 0 was expanded by 514MB 00:17:13.739 EAL: Calling mem event callback 'spdk:(nil)' 00:17:13.739 EAL: request: mp_malloc_sync 00:17:13.739 EAL: No shared files mode enabled, IPC is disabled 00:17:13.739 EAL: Heap on socket 0 was shrunk by 514MB 00:17:14.304 EAL: Trying to obtain current memory policy. 
00:17:14.304 EAL: Setting policy MPOL_PREFERRED for socket 0
00:17:14.304 EAL: Restoring previous memory policy: 4
00:17:14.304 EAL: Calling mem event callback 'spdk:(nil)'
00:17:14.304 EAL: request: mp_malloc_sync
00:17:14.304 EAL: No shared files mode enabled, IPC is disabled
00:17:14.304 EAL: Heap on socket 0 was expanded by 1026MB
00:17:15.677 EAL: Calling mem event callback 'spdk:(nil)'
00:17:15.677 EAL: request: mp_malloc_sync
00:17:15.677 EAL: No shared files mode enabled, IPC is disabled
00:17:15.677 EAL: Heap on socket 0 was shrunk by 1026MB
00:17:16.674 passed
00:17:16.674
00:17:16.674 Run Summary: Type Total Ran Passed Failed Inactive
00:17:16.674 suites 1 1 n/a 0 0
00:17:16.674 tests 2 2 2 0 0
00:17:16.674 asserts 5747 5747 5747 0 n/a
00:17:16.674
00:17:16.674 Elapsed time = 5.193 seconds
00:17:16.674 EAL: Calling mem event callback 'spdk:(nil)'
00:17:16.674 EAL: request: mp_malloc_sync
00:17:16.674 EAL: No shared files mode enabled, IPC is disabled
00:17:16.674 EAL: Heap on socket 0 was shrunk by 2MB
00:17:16.674 EAL: No shared files mode enabled, IPC is disabled
00:17:16.674 EAL: No shared files mode enabled, IPC is disabled
00:17:16.674 EAL: No shared files mode enabled, IPC is disabled
00:17:16.674
00:17:16.674 real 0m5.456s
00:17:16.674 user 0m4.651s
00:17:16.674 sys 0m0.657s
00:17:16.674 12:48:59 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:16.674 12:48:59 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:17:16.674 ************************************
00:17:16.674 END TEST env_vtophys
00:17:16.674 ************************************
00:17:16.674 12:48:59 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:17:16.674 12:48:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:16.674 12:48:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:16.674 12:48:59 env
************************************
00:17:16.674 START TEST env_pci
************************************
12:48:59 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:17:16.674
00:17:16.674
00:17:16.674 CUnit - A unit testing framework for C - Version 2.1-3
00:17:16.674 http://cunit.sourceforge.net/
00:17:16.674
00:17:16.674
00:17:16.674 Suite: pci
00:17:16.674 Test: pci_hook ...[2024-12-05 12:48:59.169169] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 55936 has claimed it
00:17:16.674 passed
00:17:16.674
00:17:16.674 EAL: Cannot find device (10000:00:01.0)
00:17:16.674 EAL: Failed to attach device on primary process
00:17:16.674 Run Summary: Type Total Ran Passed Failed Inactive
00:17:16.674 suites 1 1 n/a 0 0
00:17:16.674 tests 1 1 1 0 0
00:17:16.674 asserts 25 25 25 0 n/a
00:17:16.675
00:17:16.675 Elapsed time = 0.003 seconds
00:17:16.675
00:17:16.675 real 0m0.063s
00:17:16.675 user 0m0.030s
00:17:16.675 sys 0m0.033s
00:17:16.675 12:48:59 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:16.675 12:48:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:17:16.675 ************************************
00:17:16.675 END TEST env_pci
00:17:16.675 ************************************
00:17:16.675 12:48:59 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:17:16.675 12:48:59 env -- env/env.sh@15 -- # uname
00:17:16.675 12:48:59 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:17:16.675 12:48:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:17:16.675 12:48:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:17:16.675 12:48:59 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:17:16.675 12:48:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:16.675 12:48:59 env -- common/autotest_common.sh@10 -- # set +x
00:17:16.675 ************************************
00:17:16.675 START TEST env_dpdk_post_init
00:17:16.675 ************************************
00:17:16.675 12:48:59 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:17:16.938 EAL: Detected CPU lcores: 10
00:17:16.938 EAL: Detected NUMA nodes: 1
00:17:16.938 EAL: Detected shared linkage of DPDK
00:17:16.938 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:17:16.938 EAL: Selected IOVA mode 'PA'
00:17:16.938 TELEMETRY: No legacy callbacks, legacy socket not created
00:17:16.938 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:17:16.938 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:17:16.938 Starting DPDK initialization...
00:17:16.938 Starting SPDK post initialization...
00:17:16.938 SPDK NVMe probe
00:17:16.938 Attaching to 0000:00:10.0
00:17:16.938 Attaching to 0000:00:11.0
00:17:16.938 Attached to 0000:00:10.0
00:17:16.938 Attached to 0000:00:11.0
00:17:16.938 Cleaning up...
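Every suite in this log is bracketed by the same `run_test` START/END banners and timed with `real`/`user`/`sys`. The snippet below is a hypothetical, much-simplified stand-in for the real `run_test` in `common/autotest_common.sh` (which also manages xtrace and argument checks); it only reproduces the banner-and-timing pattern visible in the output above:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the run_test pattern seen in this log:
# print a START banner, run the command, time it, print an END banner,
# and propagate the command's exit status.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    local start end rc
    start=$(date +%s)
    "$@"
    rc=$?
    end=$(date +%s)
    echo "************************************"
    echo "END TEST $name (rc=$rc, $((end - start))s)"
    echo "************************************"
    return $rc
}

run_test demo_true true
```

The banners make it easy to pull one suite out of a huge log, e.g. `sed -n '/START TEST env_pci/,/END TEST env_pci/p'`.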
00:17:16.938
00:17:16.938 real 0m0.221s
00:17:16.938 user 0m0.062s
00:17:16.938 sys 0m0.057s
00:17:16.938 12:48:59 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:16.938 12:48:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:17:16.938 ************************************
00:17:16.938 END TEST env_dpdk_post_init
00:17:16.938 ************************************
00:17:16.938 12:48:59 env -- env/env.sh@26 -- # uname
00:17:16.938 12:48:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:17:16.938 12:48:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:17:16.938 12:48:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:16.938 12:48:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:16.938 12:48:59 env -- common/autotest_common.sh@10 -- # set +x
00:17:16.938 ************************************
00:17:16.938 START TEST env_mem_callbacks
00:17:16.938 ************************************
00:17:16.938 12:48:59 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:17:17.200 EAL: Detected CPU lcores: 10
00:17:17.200 EAL: Detected NUMA nodes: 1
00:17:17.200 EAL: Detected shared linkage of DPDK
00:17:17.200 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:17:17.200 EAL: Selected IOVA mode 'PA'
00:17:17.200
00:17:17.200
00:17:17.200 CUnit - A unit testing framework for C - Version 2.1-3
00:17:17.200 http://cunit.sourceforge.net/
00:17:17.200
00:17:17.200
00:17:17.200 Suite: memory
00:17:17.200 Test: test ...TELEMETRY: No legacy callbacks, legacy socket not created
00:17:17.200
00:17:17.200 register 0x200000200000 2097152
00:17:17.200 malloc 3145728
00:17:17.200 register 0x200000400000 4194304
00:17:17.200 buf 0x2000004fffc0 len 3145728 PASSED
00:17:17.200 malloc 64
00:17:17.200 buf 0x2000004ffec0 len 64 PASSED
00:17:17.200 malloc 4194304
00:17:17.200 register 0x200000800000 6291456
00:17:17.200 buf 0x2000009fffc0 len 4194304 PASSED
00:17:17.200 free 0x2000004fffc0 3145728
00:17:17.200 free 0x2000004ffec0 64
00:17:17.200 unregister 0x200000400000 4194304 PASSED
00:17:17.200 free 0x2000009fffc0 4194304
00:17:17.200 unregister 0x200000800000 6291456 PASSED
00:17:17.200 malloc 8388608
00:17:17.200 register 0x200000400000 10485760
00:17:17.200 buf 0x2000005fffc0 len 8388608 PASSED
00:17:17.200 free 0x2000005fffc0 8388608
00:17:17.200 unregister 0x200000400000 10485760 PASSED
00:17:17.200 passed
00:17:17.200
00:17:17.200 Run Summary: Type Total Ran Passed Failed Inactive
00:17:17.200 suites 1 1 n/a 0 0
00:17:17.200 tests 1 1 1 0 0
00:17:17.200 asserts 15 15 15 0 n/a
00:17:17.200
00:17:17.200 Elapsed time = 0.042 seconds
00:17:17.200
00:17:17.200 real 0m0.198s
00:17:17.200 user 0m0.059s
00:17:17.200 sys 0m0.038s
00:17:17.200 12:48:59 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:17.200 12:48:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:17:17.200 ************************************
00:17:17.200 END TEST env_mem_callbacks
00:17:17.200 ************************************
00:17:17.200
00:17:17.200 real 0m6.556s
00:17:17.200 user 0m5.180s
00:17:17.200 sys 0m1.012s
00:17:17.200 12:48:59 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:17.200 12:48:59 env -- common/autotest_common.sh@10 -- # set +x
00:17:17.200 ************************************
00:17:17.200 END TEST env
00:17:17.200 ************************************
00:17:17.200 12:48:59 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:17:17.200 12:48:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:17.200 12:48:59 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:17.200 12:48:59 -- common/autotest_common.sh@10 -- # set +x
00:17:17.460 ************************************
00:17:17.460 START TEST rpc
00:17:17.460 ************************************
12:48:59 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:17:17.460 * Looking for test storage...
00:17:17.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:17:17.460 12:48:59 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:17.460 12:48:59 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:17:17.460 12:48:59 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:17.460 12:48:59 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:17.460 12:48:59 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:17.460 12:48:59 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:17.460 12:48:59 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:17.460 12:48:59 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:17:17.460 12:48:59 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:17:17.460 12:48:59 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:17:17.460 12:48:59 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:17:17.460 12:48:59 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:17:17.461 12:48:59 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:17:17.461 12:48:59 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:17:17.461 12:48:59 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:17.461 12:48:59 rpc -- scripts/common.sh@344 -- # case "$op" in
00:17:17.461 12:48:59 rpc -- scripts/common.sh@345 -- # : 1
00:17:17.461 12:48:59 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:17.461 12:48:59 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:17.461 12:48:59 rpc -- scripts/common.sh@365 -- # decimal 1
00:17:17.461 12:48:59 rpc -- scripts/common.sh@353 -- # local d=1
00:17:17.461 12:48:59 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:17.461 12:48:59 rpc -- scripts/common.sh@355 -- # echo 1
00:17:17.461 12:48:59 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:17:17.461 12:48:59 rpc -- scripts/common.sh@366 -- # decimal 2
00:17:17.461 12:48:59 rpc -- scripts/common.sh@353 -- # local d=2
00:17:17.461 12:48:59 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:17.461 12:48:59 rpc -- scripts/common.sh@355 -- # echo 2
00:17:17.461 12:48:59 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:17:17.461 12:48:59 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:17.461 12:48:59 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
12:48:59 rpc -- scripts/common.sh@368 -- # return 0
00:17:17.461 12:48:59 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:17.461 12:48:59 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:17.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:17.461 --rc genhtml_branch_coverage=1
00:17:17.461 --rc genhtml_function_coverage=1
00:17:17.461 --rc genhtml_legend=1
00:17:17.461 --rc geninfo_all_blocks=1
00:17:17.461 --rc geninfo_unexecuted_blocks=1
00:17:17.461
00:17:17.461 '
00:17:17.461 12:48:59 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:17.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:17.461 --rc genhtml_branch_coverage=1
00:17:17.461 --rc genhtml_function_coverage=1
00:17:17.461 --rc genhtml_legend=1
00:17:17.461 --rc geninfo_all_blocks=1
00:17:17.461 --rc geninfo_unexecuted_blocks=1
00:17:17.461
00:17:17.461 '
00:17:17.461 12:48:59 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:17:17.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:17.461 --rc genhtml_branch_coverage=1
00:17:17.461 --rc genhtml_function_coverage=1
00:17:17.461 --rc genhtml_legend=1
00:17:17.461 --rc geninfo_all_blocks=1
00:17:17.461 --rc geninfo_unexecuted_blocks=1
00:17:17.461
00:17:17.461 '
00:17:17.461 12:48:59 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:17:17.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:17.461 --rc genhtml_branch_coverage=1
00:17:17.461 --rc genhtml_function_coverage=1
00:17:17.461 --rc genhtml_legend=1
00:17:17.461 --rc geninfo_all_blocks=1
00:17:17.461 --rc geninfo_unexecuted_blocks=1
00:17:17.461
00:17:17.461 '
00:17:17.461 12:48:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56058
00:17:17.461 12:48:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:17:17.461 12:48:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56058
00:17:17.461 12:48:59 rpc -- common/autotest_common.sh@835 -- # '[' -z 56058 ']'
00:17:17.461 12:48:59 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:17.461 12:48:59 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:17:17.461 12:48:59 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:17.461 12:48:59 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:17.461 12:48:59 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:17.461 12:48:59 rpc -- common/autotest_common.sh@10 -- # set +x
00:17:17.461 [2024-12-05 12:48:59.993635] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization...
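The `lt 1.15 2` check traced above is `scripts/common.sh` splitting both version strings on `.`, `-` and `:` and comparing them component by component. Condensed into a standalone sketch (a hypothetical simplification of the traced `cmp_versions` logic, handling numeric components only), that comparison looks like:

```shell
#!/usr/bin/env bash
# Hypothetical condensed version of the cmp_versions logic traced above:
# split both versions on '.', '-' and ':' and compare numerically,
# component by component. Returns 0 (true) when $1 < $2.
version_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i max=${#v1[@]}
    (( ${#v2[@]} > max )) && max=${#v2[@]}
    for (( i = 0; i < max; i++ )); do
        # Missing components count as 0, so "2" compares like "2.0".
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Here the check gates lcov options: coverage flags are only exported when the installed lcov is new enough.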
00:17:17.461 [2024-12-05 12:48:59.993758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56058 ]
00:17:17.720 [2024-12-05 12:49:00.153535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:17.720 [2024-12-05 12:49:00.252882] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:17:17.720 [2024-12-05 12:49:00.252941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56058' to capture a snapshot of events at runtime.
00:17:17.720 [2024-12-05 12:49:00.252951] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:17.720 [2024-12-05 12:49:00.252961] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:17.720 [2024-12-05 12:49:00.252969] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56058 for offline analysis/debug.
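The `waitforlisten 56058` step above blocks until the freshly launched `spdk_tgt` is ready to accept RPCs on `/var/tmp/spdk.sock`. A heavily simplified, hypothetical version of that polling loop, which only waits for a path to appear instead of issuing a real RPC, might look like:

```shell
#!/usr/bin/env bash
# Hypothetical, simplified waitforlisten: poll until a path exists or a
# retry budget is exhausted. The real helper in autotest_common.sh also
# verifies the target PID is still alive and that the RPC server answers.
wait_for_path() {
    local path=$1 max_tries=${2:-50} i
    for (( i = 0; i < max_tries; i++ )); do
        [[ -e "$path" ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}

# Demo: a background helper creates the path shortly after we start waiting.
tmp=$(mktemp -u)
( sleep 0.3; touch "$tmp" ) &
wait_for_path "$tmp" && echo "ready: $tmp"
wait            # reap the background helper
rm -f "$tmp"
```

Checking process liveness on each iteration (as the real helper does) lets a crashed target fail fast instead of burning the whole retry budget.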
00:17:17.720 [2024-12-05 12:49:00.253834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.657 12:49:00 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.657 12:49:00 rpc -- common/autotest_common.sh@868 -- # return 0 00:17:18.657 12:49:00 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:17:18.657 12:49:00 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:17:18.657 12:49:00 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:17:18.657 12:49:00 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:17:18.657 12:49:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:18.657 12:49:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.657 12:49:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.657 ************************************ 00:17:18.657 START TEST rpc_integrity 00:17:18.657 ************************************ 00:17:18.657 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:17:18.657 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:18.657 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.657 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.657 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.657 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:17:18.657 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:17:18.657 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:17:18.657 12:49:00 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:17:18.657 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.657 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.657 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.657 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:17:18.657 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:17:18.657 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.657 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.657 12:49:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.657 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:17:18.657 { 00:17:18.657 "name": "Malloc0", 00:17:18.657 "aliases": [ 00:17:18.657 "00447365-e5c7-4433-b8d3-ab63e842e00e" 00:17:18.657 ], 00:17:18.657 "product_name": "Malloc disk", 00:17:18.657 "block_size": 512, 00:17:18.657 "num_blocks": 16384, 00:17:18.657 "uuid": "00447365-e5c7-4433-b8d3-ab63e842e00e", 00:17:18.657 "assigned_rate_limits": { 00:17:18.657 "rw_ios_per_sec": 0, 00:17:18.657 "rw_mbytes_per_sec": 0, 00:17:18.657 "r_mbytes_per_sec": 0, 00:17:18.657 "w_mbytes_per_sec": 0 00:17:18.657 }, 00:17:18.657 "claimed": false, 00:17:18.657 "zoned": false, 00:17:18.657 "supported_io_types": { 00:17:18.657 "read": true, 00:17:18.657 "write": true, 00:17:18.657 "unmap": true, 00:17:18.657 "flush": true, 00:17:18.657 "reset": true, 00:17:18.657 "nvme_admin": false, 00:17:18.657 "nvme_io": false, 00:17:18.657 "nvme_io_md": false, 00:17:18.657 "write_zeroes": true, 00:17:18.657 "zcopy": true, 00:17:18.657 "get_zone_info": false, 00:17:18.657 "zone_management": false, 00:17:18.657 "zone_append": false, 00:17:18.657 "compare": false, 00:17:18.657 "compare_and_write": false, 00:17:18.657 "abort": true, 00:17:18.657 "seek_hole": false, 
00:17:18.657 "seek_data": false, 00:17:18.657 "copy": true, 00:17:18.657 "nvme_iov_md": false 00:17:18.657 }, 00:17:18.657 "memory_domains": [ 00:17:18.657 { 00:17:18.657 "dma_device_id": "system", 00:17:18.657 "dma_device_type": 1 00:17:18.657 }, 00:17:18.657 { 00:17:18.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.657 "dma_device_type": 2 00:17:18.657 } 00:17:18.657 ], 00:17:18.657 "driver_specific": {} 00:17:18.657 } 00:17:18.657 ]' 00:17:18.657 12:49:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:17:18.657 12:49:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:17:18.657 12:49:01 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:17:18.657 12:49:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.657 12:49:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.657 [2024-12-05 12:49:01.028037] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:17:18.657 [2024-12-05 12:49:01.028093] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.657 [2024-12-05 12:49:01.028113] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:18.657 [2024-12-05 12:49:01.028126] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.657 [2024-12-05 12:49:01.030287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.657 [2024-12-05 12:49:01.030327] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:17:18.657 Passthru0 00:17:18.657 12:49:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.657 12:49:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:17:18.657 12:49:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.657 12:49:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:17:18.657 12:49:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.657 12:49:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:17:18.657 { 00:17:18.657 "name": "Malloc0", 00:17:18.657 "aliases": [ 00:17:18.657 "00447365-e5c7-4433-b8d3-ab63e842e00e" 00:17:18.657 ], 00:17:18.657 "product_name": "Malloc disk", 00:17:18.657 "block_size": 512, 00:17:18.657 "num_blocks": 16384, 00:17:18.657 "uuid": "00447365-e5c7-4433-b8d3-ab63e842e00e", 00:17:18.657 "assigned_rate_limits": { 00:17:18.657 "rw_ios_per_sec": 0, 00:17:18.657 "rw_mbytes_per_sec": 0, 00:17:18.657 "r_mbytes_per_sec": 0, 00:17:18.657 "w_mbytes_per_sec": 0 00:17:18.657 }, 00:17:18.657 "claimed": true, 00:17:18.657 "claim_type": "exclusive_write", 00:17:18.657 "zoned": false, 00:17:18.657 "supported_io_types": { 00:17:18.657 "read": true, 00:17:18.657 "write": true, 00:17:18.657 "unmap": true, 00:17:18.657 "flush": true, 00:17:18.657 "reset": true, 00:17:18.657 "nvme_admin": false, 00:17:18.657 "nvme_io": false, 00:17:18.657 "nvme_io_md": false, 00:17:18.657 "write_zeroes": true, 00:17:18.657 "zcopy": true, 00:17:18.657 "get_zone_info": false, 00:17:18.657 "zone_management": false, 00:17:18.657 "zone_append": false, 00:17:18.657 "compare": false, 00:17:18.657 "compare_and_write": false, 00:17:18.657 "abort": true, 00:17:18.657 "seek_hole": false, 00:17:18.657 "seek_data": false, 00:17:18.657 "copy": true, 00:17:18.657 "nvme_iov_md": false 00:17:18.657 }, 00:17:18.657 "memory_domains": [ 00:17:18.657 { 00:17:18.657 "dma_device_id": "system", 00:17:18.657 "dma_device_type": 1 00:17:18.657 }, 00:17:18.657 { 00:17:18.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.657 "dma_device_type": 2 00:17:18.657 } 00:17:18.657 ], 00:17:18.657 "driver_specific": {} 00:17:18.657 }, 00:17:18.657 { 00:17:18.657 "name": "Passthru0", 00:17:18.657 "aliases": [ 00:17:18.657 "0c450624-074c-5b1d-8559-7279313f0c0b" 00:17:18.657 ], 00:17:18.657 "product_name": "passthru", 00:17:18.657 
"block_size": 512, 00:17:18.657 "num_blocks": 16384, 00:17:18.657 "uuid": "0c450624-074c-5b1d-8559-7279313f0c0b", 00:17:18.657 "assigned_rate_limits": { 00:17:18.657 "rw_ios_per_sec": 0, 00:17:18.657 "rw_mbytes_per_sec": 0, 00:17:18.657 "r_mbytes_per_sec": 0, 00:17:18.657 "w_mbytes_per_sec": 0 00:17:18.657 }, 00:17:18.657 "claimed": false, 00:17:18.657 "zoned": false, 00:17:18.657 "supported_io_types": { 00:17:18.657 "read": true, 00:17:18.657 "write": true, 00:17:18.657 "unmap": true, 00:17:18.657 "flush": true, 00:17:18.657 "reset": true, 00:17:18.657 "nvme_admin": false, 00:17:18.657 "nvme_io": false, 00:17:18.657 "nvme_io_md": false, 00:17:18.657 "write_zeroes": true, 00:17:18.657 "zcopy": true, 00:17:18.657 "get_zone_info": false, 00:17:18.657 "zone_management": false, 00:17:18.657 "zone_append": false, 00:17:18.657 "compare": false, 00:17:18.657 "compare_and_write": false, 00:17:18.657 "abort": true, 00:17:18.657 "seek_hole": false, 00:17:18.657 "seek_data": false, 00:17:18.657 "copy": true, 00:17:18.657 "nvme_iov_md": false 00:17:18.657 }, 00:17:18.657 "memory_domains": [ 00:17:18.657 { 00:17:18.657 "dma_device_id": "system", 00:17:18.657 "dma_device_type": 1 00:17:18.657 }, 00:17:18.657 { 00:17:18.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.657 "dma_device_type": 2 00:17:18.657 } 00:17:18.657 ], 00:17:18.657 "driver_specific": { 00:17:18.657 "passthru": { 00:17:18.657 "name": "Passthru0", 00:17:18.657 "base_bdev_name": "Malloc0" 00:17:18.657 } 00:17:18.657 } 00:17:18.657 } 00:17:18.657 ]' 00:17:18.657 12:49:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:17:18.657 12:49:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:17:18.657 12:49:01 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:17:18.657 12:49:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.657 12:49:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.657 12:49:01 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.657 12:49:01 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:18.657 12:49:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.657 12:49:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.657 12:49:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.657 12:49:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:18.657 12:49:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.657 12:49:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.658 12:49:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.658 12:49:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:17:18.658 12:49:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:17:18.658 ************************************ 00:17:18.658 END TEST rpc_integrity 00:17:18.658 ************************************ 00:17:18.658 12:49:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:17:18.658 00:17:18.658 real 0m0.246s 00:17:18.658 user 0m0.126s 00:17:18.658 sys 0m0.033s 00:17:18.658 12:49:01 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:18.658 12:49:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.658 12:49:01 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:17:18.658 12:49:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:18.658 12:49:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.658 12:49:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.658 ************************************ 00:17:18.658 START TEST rpc_plugins 00:17:18.658 ************************************ 00:17:18.658 12:49:01 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:17:18.658 12:49:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:17:18.658 12:49:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.658 12:49:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:18.658 12:49:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.658 12:49:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:17:18.658 12:49:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:17:18.658 12:49:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.658 12:49:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:18.658 12:49:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.658 12:49:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:17:18.658 { 00:17:18.658 "name": "Malloc1", 00:17:18.658 "aliases": [ 00:17:18.658 "431c3f39-bc66-489d-a52b-b63acfd65f42" 00:17:18.658 ], 00:17:18.658 "product_name": "Malloc disk", 00:17:18.658 "block_size": 4096, 00:17:18.658 "num_blocks": 256, 00:17:18.658 "uuid": "431c3f39-bc66-489d-a52b-b63acfd65f42", 00:17:18.658 "assigned_rate_limits": { 00:17:18.658 "rw_ios_per_sec": 0, 00:17:18.658 "rw_mbytes_per_sec": 0, 00:17:18.658 "r_mbytes_per_sec": 0, 00:17:18.658 "w_mbytes_per_sec": 0 00:17:18.658 }, 00:17:18.658 "claimed": false, 00:17:18.658 "zoned": false, 00:17:18.658 "supported_io_types": { 00:17:18.658 "read": true, 00:17:18.658 "write": true, 00:17:18.658 "unmap": true, 00:17:18.658 "flush": true, 00:17:18.658 "reset": true, 00:17:18.658 "nvme_admin": false, 00:17:18.658 "nvme_io": false, 00:17:18.658 "nvme_io_md": false, 00:17:18.658 "write_zeroes": true, 00:17:18.658 "zcopy": true, 00:17:18.658 "get_zone_info": false, 00:17:18.658 "zone_management": false, 00:17:18.658 "zone_append": false, 00:17:18.658 "compare": false, 00:17:18.658 "compare_and_write": false, 00:17:18.658 "abort": true, 00:17:18.658 "seek_hole": false, 00:17:18.658 "seek_data": false, 00:17:18.658 "copy": 
true, 00:17:18.658 "nvme_iov_md": false 00:17:18.658 }, 00:17:18.658 "memory_domains": [ 00:17:18.658 { 00:17:18.658 "dma_device_id": "system", 00:17:18.658 "dma_device_type": 1 00:17:18.658 }, 00:17:18.658 { 00:17:18.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.658 "dma_device_type": 2 00:17:18.658 } 00:17:18.658 ], 00:17:18.658 "driver_specific": {} 00:17:18.658 } 00:17:18.658 ]' 00:17:18.658 12:49:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:17:18.915 12:49:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:17:18.915 12:49:01 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:17:18.915 12:49:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.915 12:49:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:18.915 12:49:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.915 12:49:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:17:18.915 12:49:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.915 12:49:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:18.915 12:49:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.915 12:49:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:17:18.915 12:49:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:17:18.915 12:49:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:17:18.915 00:17:18.915 real 0m0.107s 00:17:18.915 user 0m0.062s 00:17:18.915 sys 0m0.011s 00:17:18.915 ************************************ 00:17:18.915 END TEST rpc_plugins 00:17:18.915 ************************************ 00:17:18.915 12:49:01 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:18.915 12:49:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:18.915 12:49:01 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:17:18.915 12:49:01 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:18.915 12:49:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.915 12:49:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.915 ************************************ 00:17:18.915 START TEST rpc_trace_cmd_test 00:17:18.915 ************************************ 00:17:18.915 12:49:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:17:18.915 12:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:17:18.915 12:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:17:18.915 12:49:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.915 12:49:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.915 12:49:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.915 12:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:17:18.915 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56058", 00:17:18.915 "tpoint_group_mask": "0x8", 00:17:18.915 "iscsi_conn": { 00:17:18.915 "mask": "0x2", 00:17:18.915 "tpoint_mask": "0x0" 00:17:18.915 }, 00:17:18.915 "scsi": { 00:17:18.915 "mask": "0x4", 00:17:18.915 "tpoint_mask": "0x0" 00:17:18.915 }, 00:17:18.915 "bdev": { 00:17:18.915 "mask": "0x8", 00:17:18.915 "tpoint_mask": "0xffffffffffffffff" 00:17:18.915 }, 00:17:18.915 "nvmf_rdma": { 00:17:18.915 "mask": "0x10", 00:17:18.915 "tpoint_mask": "0x0" 00:17:18.915 }, 00:17:18.915 "nvmf_tcp": { 00:17:18.915 "mask": "0x20", 00:17:18.915 "tpoint_mask": "0x0" 00:17:18.915 }, 00:17:18.915 "ftl": { 00:17:18.915 "mask": "0x40", 00:17:18.915 "tpoint_mask": "0x0" 00:17:18.915 }, 00:17:18.915 "blobfs": { 00:17:18.915 "mask": "0x80", 00:17:18.915 "tpoint_mask": "0x0" 00:17:18.915 }, 00:17:18.915 "dsa": { 00:17:18.915 "mask": "0x200", 00:17:18.915 "tpoint_mask": "0x0" 00:17:18.915 }, 00:17:18.915 "thread": { 00:17:18.915 "mask": "0x400", 00:17:18.915 
"tpoint_mask": "0x0" 00:17:18.915 }, 00:17:18.915 "nvme_pcie": { 00:17:18.915 "mask": "0x800", 00:17:18.915 "tpoint_mask": "0x0" 00:17:18.915 }, 00:17:18.915 "iaa": { 00:17:18.915 "mask": "0x1000", 00:17:18.915 "tpoint_mask": "0x0" 00:17:18.916 }, 00:17:18.916 "nvme_tcp": { 00:17:18.916 "mask": "0x2000", 00:17:18.916 "tpoint_mask": "0x0" 00:17:18.916 }, 00:17:18.916 "bdev_nvme": { 00:17:18.916 "mask": "0x4000", 00:17:18.916 "tpoint_mask": "0x0" 00:17:18.916 }, 00:17:18.916 "sock": { 00:17:18.916 "mask": "0x8000", 00:17:18.916 "tpoint_mask": "0x0" 00:17:18.916 }, 00:17:18.916 "blob": { 00:17:18.916 "mask": "0x10000", 00:17:18.916 "tpoint_mask": "0x0" 00:17:18.916 }, 00:17:18.916 "bdev_raid": { 00:17:18.916 "mask": "0x20000", 00:17:18.916 "tpoint_mask": "0x0" 00:17:18.916 }, 00:17:18.916 "scheduler": { 00:17:18.916 "mask": "0x40000", 00:17:18.916 "tpoint_mask": "0x0" 00:17:18.916 } 00:17:18.916 }' 00:17:18.916 12:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:17:18.916 12:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:17:18.916 12:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:17:18.916 12:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:17:18.916 12:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:17:18.916 12:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:17:18.916 12:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:17:18.916 12:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:17:18.916 12:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:17:19.173 ************************************ 00:17:19.173 END TEST rpc_trace_cmd_test 00:17:19.173 ************************************ 00:17:19.173 12:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:17:19.173 00:17:19.173 real 0m0.173s 00:17:19.173 user 
0m0.143s 00:17:19.173 sys 0m0.021s 00:17:19.173 12:49:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.173 12:49:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.173 12:49:01 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:17:19.173 12:49:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:17:19.173 12:49:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:17:19.173 12:49:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:19.173 12:49:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.173 12:49:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.173 ************************************ 00:17:19.173 START TEST rpc_daemon_integrity 00:17:19.173 ************************************ 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:17:19.173 { 00:17:19.173 "name": "Malloc2", 00:17:19.173 "aliases": [ 00:17:19.173 "b9dcff45-ab49-4578-8bed-2b7f8cc7924a" 00:17:19.173 ], 00:17:19.173 "product_name": "Malloc disk", 00:17:19.173 "block_size": 512, 00:17:19.173 "num_blocks": 16384, 00:17:19.173 "uuid": "b9dcff45-ab49-4578-8bed-2b7f8cc7924a", 00:17:19.173 "assigned_rate_limits": { 00:17:19.173 "rw_ios_per_sec": 0, 00:17:19.173 "rw_mbytes_per_sec": 0, 00:17:19.173 "r_mbytes_per_sec": 0, 00:17:19.173 "w_mbytes_per_sec": 0 00:17:19.173 }, 00:17:19.173 "claimed": false, 00:17:19.173 "zoned": false, 00:17:19.173 "supported_io_types": { 00:17:19.173 "read": true, 00:17:19.173 "write": true, 00:17:19.173 "unmap": true, 00:17:19.173 "flush": true, 00:17:19.173 "reset": true, 00:17:19.173 "nvme_admin": false, 00:17:19.173 "nvme_io": false, 00:17:19.173 "nvme_io_md": false, 00:17:19.173 "write_zeroes": true, 00:17:19.173 "zcopy": true, 00:17:19.173 "get_zone_info": false, 00:17:19.173 "zone_management": false, 00:17:19.173 "zone_append": false, 00:17:19.173 "compare": false, 00:17:19.173 "compare_and_write": false, 00:17:19.173 "abort": true, 00:17:19.173 "seek_hole": false, 00:17:19.173 "seek_data": false, 00:17:19.173 "copy": true, 00:17:19.173 "nvme_iov_md": false 00:17:19.173 }, 00:17:19.173 "memory_domains": [ 00:17:19.173 { 00:17:19.173 "dma_device_id": "system", 00:17:19.173 "dma_device_type": 1 00:17:19.173 }, 00:17:19.173 { 00:17:19.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.173 "dma_device_type": 2 00:17:19.173 } 
00:17:19.173 ], 00:17:19.173 "driver_specific": {} 00:17:19.173 } 00:17:19.173 ]' 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:19.173 [2024-12-05 12:49:01.659323] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:17:19.173 [2024-12-05 12:49:01.659378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.173 [2024-12-05 12:49:01.659399] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:19.173 [2024-12-05 12:49:01.659411] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.173 [2024-12-05 12:49:01.661564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.173 [2024-12-05 12:49:01.661602] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:17:19.173 Passthru0 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.173 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:17:19.173 { 00:17:19.173 "name": "Malloc2", 00:17:19.173 "aliases": [ 00:17:19.173 "b9dcff45-ab49-4578-8bed-2b7f8cc7924a" 
00:17:19.173 ], 00:17:19.173 "product_name": "Malloc disk", 00:17:19.173 "block_size": 512, 00:17:19.173 "num_blocks": 16384, 00:17:19.173 "uuid": "b9dcff45-ab49-4578-8bed-2b7f8cc7924a", 00:17:19.173 "assigned_rate_limits": { 00:17:19.173 "rw_ios_per_sec": 0, 00:17:19.173 "rw_mbytes_per_sec": 0, 00:17:19.173 "r_mbytes_per_sec": 0, 00:17:19.174 "w_mbytes_per_sec": 0 00:17:19.174 }, 00:17:19.174 "claimed": true, 00:17:19.174 "claim_type": "exclusive_write", 00:17:19.174 "zoned": false, 00:17:19.174 "supported_io_types": { 00:17:19.174 "read": true, 00:17:19.174 "write": true, 00:17:19.174 "unmap": true, 00:17:19.174 "flush": true, 00:17:19.174 "reset": true, 00:17:19.174 "nvme_admin": false, 00:17:19.174 "nvme_io": false, 00:17:19.174 "nvme_io_md": false, 00:17:19.174 "write_zeroes": true, 00:17:19.174 "zcopy": true, 00:17:19.174 "get_zone_info": false, 00:17:19.174 "zone_management": false, 00:17:19.174 "zone_append": false, 00:17:19.174 "compare": false, 00:17:19.174 "compare_and_write": false, 00:17:19.174 "abort": true, 00:17:19.174 "seek_hole": false, 00:17:19.174 "seek_data": false, 00:17:19.174 "copy": true, 00:17:19.174 "nvme_iov_md": false 00:17:19.174 }, 00:17:19.174 "memory_domains": [ 00:17:19.174 { 00:17:19.174 "dma_device_id": "system", 00:17:19.174 "dma_device_type": 1 00:17:19.174 }, 00:17:19.174 { 00:17:19.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.174 "dma_device_type": 2 00:17:19.174 } 00:17:19.174 ], 00:17:19.174 "driver_specific": {} 00:17:19.174 }, 00:17:19.174 { 00:17:19.174 "name": "Passthru0", 00:17:19.174 "aliases": [ 00:17:19.174 "c44bf47e-b5d5-5035-a094-2fc9566b73a9" 00:17:19.174 ], 00:17:19.174 "product_name": "passthru", 00:17:19.174 "block_size": 512, 00:17:19.174 "num_blocks": 16384, 00:17:19.174 "uuid": "c44bf47e-b5d5-5035-a094-2fc9566b73a9", 00:17:19.174 "assigned_rate_limits": { 00:17:19.174 "rw_ios_per_sec": 0, 00:17:19.174 "rw_mbytes_per_sec": 0, 00:17:19.174 "r_mbytes_per_sec": 0, 00:17:19.174 "w_mbytes_per_sec": 0 
00:17:19.174 }, 00:17:19.174 "claimed": false, 00:17:19.174 "zoned": false, 00:17:19.174 "supported_io_types": { 00:17:19.174 "read": true, 00:17:19.174 "write": true, 00:17:19.174 "unmap": true, 00:17:19.174 "flush": true, 00:17:19.174 "reset": true, 00:17:19.174 "nvme_admin": false, 00:17:19.174 "nvme_io": false, 00:17:19.174 "nvme_io_md": false, 00:17:19.174 "write_zeroes": true, 00:17:19.174 "zcopy": true, 00:17:19.174 "get_zone_info": false, 00:17:19.174 "zone_management": false, 00:17:19.174 "zone_append": false, 00:17:19.174 "compare": false, 00:17:19.174 "compare_and_write": false, 00:17:19.174 "abort": true, 00:17:19.174 "seek_hole": false, 00:17:19.174 "seek_data": false, 00:17:19.174 "copy": true, 00:17:19.174 "nvme_iov_md": false 00:17:19.174 }, 00:17:19.174 "memory_domains": [ 00:17:19.174 { 00:17:19.174 "dma_device_id": "system", 00:17:19.174 "dma_device_type": 1 00:17:19.174 }, 00:17:19.174 { 00:17:19.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.174 "dma_device_type": 2 00:17:19.174 } 00:17:19.174 ], 00:17:19.174 "driver_specific": { 00:17:19.174 "passthru": { 00:17:19.174 "name": "Passthru0", 00:17:19.174 "base_bdev_name": "Malloc2" 00:17:19.174 } 00:17:19.174 } 00:17:19.174 } 00:17:19.174 ]' 00:17:19.174 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:17:19.174 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:17:19.174 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:17:19.174 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.174 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:19.174 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.174 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:19.174 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:19.174 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:19.174 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.174 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:19.174 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.174 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:19.431 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.431 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:17:19.431 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:17:19.431 ************************************ 00:17:19.431 END TEST rpc_daemon_integrity 00:17:19.431 ************************************ 00:17:19.431 12:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:17:19.431 00:17:19.431 real 0m0.241s 00:17:19.431 user 0m0.129s 00:17:19.431 sys 0m0.032s 00:17:19.431 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.431 12:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:19.431 12:49:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:19.431 12:49:01 rpc -- rpc/rpc.sh@84 -- # killprocess 56058 00:17:19.431 12:49:01 rpc -- common/autotest_common.sh@954 -- # '[' -z 56058 ']' 00:17:19.431 12:49:01 rpc -- common/autotest_common.sh@958 -- # kill -0 56058 00:17:19.431 12:49:01 rpc -- common/autotest_common.sh@959 -- # uname 00:17:19.431 12:49:01 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.431 12:49:01 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56058 00:17:19.431 killing process with pid 56058 00:17:19.431 12:49:01 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:19.431 12:49:01 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:17:19.431 12:49:01 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56058' 00:17:19.431 12:49:01 rpc -- common/autotest_common.sh@973 -- # kill 56058 00:17:19.431 12:49:01 rpc -- common/autotest_common.sh@978 -- # wait 56058 00:17:20.849 00:17:20.849 real 0m3.558s 00:17:20.849 user 0m4.051s 00:17:20.849 sys 0m0.588s 00:17:20.849 12:49:03 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:20.849 12:49:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.849 ************************************ 00:17:20.849 END TEST rpc 00:17:20.849 ************************************ 00:17:20.849 12:49:03 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:17:20.849 12:49:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:20.849 12:49:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:20.849 12:49:03 -- common/autotest_common.sh@10 -- # set +x 00:17:20.849 ************************************ 00:17:20.849 START TEST skip_rpc 00:17:20.849 ************************************ 00:17:20.849 12:49:03 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:17:21.107 * Looking for test storage... 
00:17:21.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:17:21.107 12:49:03 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:21.107 12:49:03 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:21.107 12:49:03 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:21.107 12:49:03 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@345 -- # : 1 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:21.107 12:49:03 skip_rpc -- scripts/common.sh@368 -- # return 0 00:17:21.107 12:49:03 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:21.107 12:49:03 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:21.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.107 --rc genhtml_branch_coverage=1 00:17:21.107 --rc genhtml_function_coverage=1 00:17:21.107 --rc genhtml_legend=1 00:17:21.107 --rc geninfo_all_blocks=1 00:17:21.107 --rc geninfo_unexecuted_blocks=1 00:17:21.107 00:17:21.107 ' 00:17:21.107 12:49:03 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:21.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.107 --rc genhtml_branch_coverage=1 00:17:21.107 --rc genhtml_function_coverage=1 00:17:21.107 --rc genhtml_legend=1 00:17:21.107 --rc geninfo_all_blocks=1 00:17:21.107 --rc geninfo_unexecuted_blocks=1 00:17:21.107 00:17:21.107 ' 00:17:21.107 12:49:03 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:17:21.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.107 --rc genhtml_branch_coverage=1 00:17:21.107 --rc genhtml_function_coverage=1 00:17:21.107 --rc genhtml_legend=1 00:17:21.107 --rc geninfo_all_blocks=1 00:17:21.107 --rc geninfo_unexecuted_blocks=1 00:17:21.107 00:17:21.107 ' 00:17:21.107 12:49:03 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:21.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.107 --rc genhtml_branch_coverage=1 00:17:21.107 --rc genhtml_function_coverage=1 00:17:21.107 --rc genhtml_legend=1 00:17:21.107 --rc geninfo_all_blocks=1 00:17:21.107 --rc geninfo_unexecuted_blocks=1 00:17:21.107 00:17:21.107 ' 00:17:21.107 12:49:03 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:21.107 12:49:03 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:17:21.107 12:49:03 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:17:21.107 12:49:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:21.107 12:49:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.107 12:49:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.107 ************************************ 00:17:21.107 START TEST skip_rpc 00:17:21.107 ************************************ 00:17:21.107 12:49:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:17:21.107 12:49:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56270 00:17:21.107 12:49:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:21.107 12:49:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:17:21.107 12:49:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:17:21.107 [2024-12-05 12:49:03.608420] Starting SPDK v25.01-pre 
git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:17:21.107 [2024-12-05 12:49:03.608544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56270 ] 00:17:21.366 [2024-12-05 12:49:03.766297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.366 [2024-12-05 12:49:03.865730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.654 12:49:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:17:26.654 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:26.654 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:17:26.654 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:26.654 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.654 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:26.654 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.654 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:17:26.654 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.654 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.654 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:26.655 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:26.655 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:26.655 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:26.655 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:17:26.655 12:49:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:17:26.655 12:49:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56270 00:17:26.655 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56270 ']' 00:17:26.655 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56270 00:17:26.655 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:17:26.655 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.655 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56270 00:17:26.655 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:26.655 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:26.655 killing process with pid 56270 00:17:26.655 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56270' 00:17:26.655 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56270 00:17:26.655 12:49:08 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56270 00:17:27.221 00:17:27.221 real 0m6.225s 00:17:27.221 user 0m5.854s 00:17:27.221 sys 0m0.268s 00:17:27.221 12:49:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.221 12:49:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.221 ************************************ 00:17:27.221 END TEST skip_rpc 00:17:27.221 ************************************ 00:17:27.221 12:49:09 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:17:27.221 12:49:09 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:27.221 12:49:09 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.221 12:49:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.221 
************************************ 00:17:27.221 START TEST skip_rpc_with_json 00:17:27.221 ************************************ 00:17:27.221 12:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:17:27.221 12:49:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:17:27.221 12:49:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56363 00:17:27.221 12:49:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:27.221 12:49:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:27.221 12:49:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56363 00:17:27.221 12:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 56363 ']' 00:17:27.221 12:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.221 12:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.221 12:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.479 12:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.479 12:49:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:27.479 [2024-12-05 12:49:09.881020] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:17:27.479 [2024-12-05 12:49:09.881152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56363 ] 00:17:27.479 [2024-12-05 12:49:10.041480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.737 [2024-12-05 12:49:10.127472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.345 12:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.345 12:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:17:28.345 12:49:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:17:28.345 12:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.345 12:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:28.345 [2024-12-05 12:49:10.674974] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:17:28.345 request: 00:17:28.345 { 00:17:28.345 "trtype": "tcp", 00:17:28.345 "method": "nvmf_get_transports", 00:17:28.345 "req_id": 1 00:17:28.345 } 00:17:28.345 Got JSON-RPC error response 00:17:28.345 response: 00:17:28.345 { 00:17:28.345 "code": -19, 00:17:28.345 "message": "No such device" 00:17:28.345 } 00:17:28.345 12:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:28.345 12:49:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:17:28.345 12:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.345 12:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:28.345 [2024-12-05 12:49:10.687066] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:17:28.345 12:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.345 12:49:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:17:28.345 12:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.345 12:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:28.345 12:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.345 12:49:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:28.345 { 00:17:28.345 "subsystems": [ 00:17:28.345 { 00:17:28.345 "subsystem": "fsdev", 00:17:28.345 "config": [ 00:17:28.345 { 00:17:28.345 "method": "fsdev_set_opts", 00:17:28.345 "params": { 00:17:28.345 "fsdev_io_pool_size": 65535, 00:17:28.345 "fsdev_io_cache_size": 256 00:17:28.345 } 00:17:28.345 } 00:17:28.345 ] 00:17:28.345 }, 00:17:28.345 { 00:17:28.345 "subsystem": "keyring", 00:17:28.345 "config": [] 00:17:28.345 }, 00:17:28.345 { 00:17:28.345 "subsystem": "iobuf", 00:17:28.345 "config": [ 00:17:28.345 { 00:17:28.345 "method": "iobuf_set_options", 00:17:28.345 "params": { 00:17:28.345 "small_pool_count": 8192, 00:17:28.345 "large_pool_count": 1024, 00:17:28.345 "small_bufsize": 8192, 00:17:28.345 "large_bufsize": 135168, 00:17:28.345 "enable_numa": false 00:17:28.345 } 00:17:28.345 } 00:17:28.345 ] 00:17:28.345 }, 00:17:28.345 { 00:17:28.345 "subsystem": "sock", 00:17:28.345 "config": [ 00:17:28.345 { 00:17:28.345 "method": "sock_set_default_impl", 00:17:28.345 "params": { 00:17:28.345 "impl_name": "posix" 00:17:28.345 } 00:17:28.345 }, 00:17:28.345 { 00:17:28.345 "method": "sock_impl_set_options", 00:17:28.345 "params": { 00:17:28.345 "impl_name": "ssl", 00:17:28.345 "recv_buf_size": 4096, 00:17:28.345 "send_buf_size": 4096, 00:17:28.345 "enable_recv_pipe": true, 00:17:28.345 "enable_quickack": false, 00:17:28.345 
"enable_placement_id": 0, 00:17:28.345 "enable_zerocopy_send_server": true, 00:17:28.345 "enable_zerocopy_send_client": false, 00:17:28.345 "zerocopy_threshold": 0, 00:17:28.345 "tls_version": 0, 00:17:28.345 "enable_ktls": false 00:17:28.345 } 00:17:28.345 }, 00:17:28.345 { 00:17:28.345 "method": "sock_impl_set_options", 00:17:28.345 "params": { 00:17:28.345 "impl_name": "posix", 00:17:28.345 "recv_buf_size": 2097152, 00:17:28.345 "send_buf_size": 2097152, 00:17:28.345 "enable_recv_pipe": true, 00:17:28.345 "enable_quickack": false, 00:17:28.345 "enable_placement_id": 0, 00:17:28.345 "enable_zerocopy_send_server": true, 00:17:28.345 "enable_zerocopy_send_client": false, 00:17:28.345 "zerocopy_threshold": 0, 00:17:28.345 "tls_version": 0, 00:17:28.345 "enable_ktls": false 00:17:28.345 } 00:17:28.345 } 00:17:28.345 ] 00:17:28.345 }, 00:17:28.345 { 00:17:28.345 "subsystem": "vmd", 00:17:28.345 "config": [] 00:17:28.345 }, 00:17:28.345 { 00:17:28.345 "subsystem": "accel", 00:17:28.345 "config": [ 00:17:28.345 { 00:17:28.345 "method": "accel_set_options", 00:17:28.345 "params": { 00:17:28.345 "small_cache_size": 128, 00:17:28.345 "large_cache_size": 16, 00:17:28.345 "task_count": 2048, 00:17:28.345 "sequence_count": 2048, 00:17:28.345 "buf_count": 2048 00:17:28.345 } 00:17:28.345 } 00:17:28.345 ] 00:17:28.345 }, 00:17:28.345 { 00:17:28.345 "subsystem": "bdev", 00:17:28.345 "config": [ 00:17:28.345 { 00:17:28.345 "method": "bdev_set_options", 00:17:28.345 "params": { 00:17:28.345 "bdev_io_pool_size": 65535, 00:17:28.345 "bdev_io_cache_size": 256, 00:17:28.345 "bdev_auto_examine": true, 00:17:28.345 "iobuf_small_cache_size": 128, 00:17:28.345 "iobuf_large_cache_size": 16 00:17:28.345 } 00:17:28.345 }, 00:17:28.345 { 00:17:28.345 "method": "bdev_raid_set_options", 00:17:28.345 "params": { 00:17:28.345 "process_window_size_kb": 1024, 00:17:28.345 "process_max_bandwidth_mb_sec": 0 00:17:28.345 } 00:17:28.345 }, 00:17:28.345 { 00:17:28.345 "method": "bdev_iscsi_set_options", 
00:17:28.345 "params": {
00:17:28.345 "timeout_sec": 30
00:17:28.345 }
00:17:28.345 },
00:17:28.345 {
00:17:28.345 "method": "bdev_nvme_set_options",
00:17:28.345 "params": {
00:17:28.345 "action_on_timeout": "none",
00:17:28.345 "timeout_us": 0,
00:17:28.345 "timeout_admin_us": 0,
00:17:28.345 "keep_alive_timeout_ms": 10000,
00:17:28.345 "arbitration_burst": 0,
00:17:28.345 "low_priority_weight": 0,
00:17:28.345 "medium_priority_weight": 0,
00:17:28.345 "high_priority_weight": 0,
00:17:28.345 "nvme_adminq_poll_period_us": 10000,
00:17:28.345 "nvme_ioq_poll_period_us": 0,
00:17:28.345 "io_queue_requests": 0,
00:17:28.345 "delay_cmd_submit": true,
00:17:28.345 "transport_retry_count": 4,
00:17:28.345 "bdev_retry_count": 3,
00:17:28.345 "transport_ack_timeout": 0,
00:17:28.345 "ctrlr_loss_timeout_sec": 0,
00:17:28.345 "reconnect_delay_sec": 0,
00:17:28.345 "fast_io_fail_timeout_sec": 0,
00:17:28.345 "disable_auto_failback": false,
00:17:28.345 "generate_uuids": false,
00:17:28.345 "transport_tos": 0,
00:17:28.345 "nvme_error_stat": false,
00:17:28.345 "rdma_srq_size": 0,
00:17:28.345 "io_path_stat": false,
00:17:28.345 "allow_accel_sequence": false,
00:17:28.345 "rdma_max_cq_size": 0,
00:17:28.345 "rdma_cm_event_timeout_ms": 0,
00:17:28.345 "dhchap_digests": [
00:17:28.345 "sha256",
00:17:28.345 "sha384",
00:17:28.345 "sha512"
00:17:28.345 ],
00:17:28.345 "dhchap_dhgroups": [
00:17:28.345 "null",
00:17:28.345 "ffdhe2048",
00:17:28.345 "ffdhe3072",
00:17:28.345 "ffdhe4096",
00:17:28.345 "ffdhe6144",
00:17:28.345 "ffdhe8192"
00:17:28.345 ]
00:17:28.345 }
00:17:28.345 },
00:17:28.345 {
00:17:28.345 "method": "bdev_nvme_set_hotplug",
00:17:28.345 "params": {
00:17:28.345 "period_us": 100000,
00:17:28.345 "enable": false
00:17:28.345 }
00:17:28.345 },
00:17:28.345 {
00:17:28.345 "method": "bdev_wait_for_examine"
00:17:28.345 }
00:17:28.345 ]
00:17:28.345 },
00:17:28.345 {
00:17:28.345 "subsystem": "scsi",
00:17:28.345 "config": null
00:17:28.345 },
00:17:28.345 {
00:17:28.345 "subsystem": "scheduler",
00:17:28.345 "config": [
00:17:28.345 {
00:17:28.345 "method": "framework_set_scheduler",
00:17:28.345 "params": {
00:17:28.345 "name": "static"
00:17:28.345 }
00:17:28.345 }
00:17:28.345 ]
00:17:28.345 },
00:17:28.345 {
00:17:28.345 "subsystem": "vhost_scsi",
00:17:28.345 "config": []
00:17:28.345 },
00:17:28.345 {
00:17:28.345 "subsystem": "vhost_blk",
00:17:28.345 "config": []
00:17:28.345 },
00:17:28.345 {
00:17:28.345 "subsystem": "ublk",
00:17:28.345 "config": []
00:17:28.345 },
00:17:28.345 {
00:17:28.345 "subsystem": "nbd",
00:17:28.345 "config": []
00:17:28.345 },
00:17:28.345 {
00:17:28.345 "subsystem": "nvmf",
00:17:28.345 "config": [
00:17:28.345 {
00:17:28.345 "method": "nvmf_set_config",
00:17:28.345 "params": {
00:17:28.345 "discovery_filter": "match_any",
00:17:28.345 "admin_cmd_passthru": {
00:17:28.345 "identify_ctrlr": false
00:17:28.345 },
00:17:28.345 "dhchap_digests": [
00:17:28.345 "sha256",
00:17:28.345 "sha384",
00:17:28.345 "sha512"
00:17:28.345 ],
00:17:28.345 "dhchap_dhgroups": [
00:17:28.345 "null",
00:17:28.345 "ffdhe2048",
00:17:28.345 "ffdhe3072",
00:17:28.345 "ffdhe4096",
00:17:28.345 "ffdhe6144",
00:17:28.345 "ffdhe8192"
00:17:28.346 ]
00:17:28.346 }
00:17:28.346 },
00:17:28.346 {
00:17:28.346 "method": "nvmf_set_max_subsystems",
00:17:28.346 "params": {
00:17:28.346 "max_subsystems": 1024
00:17:28.346 }
00:17:28.346 },
00:17:28.346 {
00:17:28.346 "method": "nvmf_set_crdt",
00:17:28.346 "params": {
00:17:28.346 "crdt1": 0,
00:17:28.346 "crdt2": 0,
00:17:28.346 "crdt3": 0
00:17:28.346 }
00:17:28.346 },
00:17:28.346 {
00:17:28.346 "method": "nvmf_create_transport",
00:17:28.346 "params": {
00:17:28.346 "trtype": "TCP",
00:17:28.346 "max_queue_depth": 128,
00:17:28.346 "max_io_qpairs_per_ctrlr": 127,
00:17:28.346 "in_capsule_data_size": 4096,
00:17:28.346 "max_io_size": 131072,
00:17:28.346 "io_unit_size": 131072,
00:17:28.346 "max_aq_depth": 128,
00:17:28.346 "num_shared_buffers": 511,
00:17:28.346 "buf_cache_size": 4294967295,
00:17:28.346 "dif_insert_or_strip": false,
00:17:28.346 "zcopy": false,
00:17:28.346 "c2h_success": true,
00:17:28.346 "sock_priority": 0,
00:17:28.346 "abort_timeout_sec": 1,
00:17:28.346 "ack_timeout": 0,
00:17:28.346 "data_wr_pool_size": 0
00:17:28.346 }
00:17:28.346 }
00:17:28.346 ]
00:17:28.346 },
00:17:28.346 {
00:17:28.346 "subsystem": "iscsi",
00:17:28.346 "config": [
00:17:28.346 {
00:17:28.346 "method": "iscsi_set_options",
00:17:28.346 "params": {
00:17:28.346 "node_base": "iqn.2016-06.io.spdk",
00:17:28.346 "max_sessions": 128,
00:17:28.346 "max_connections_per_session": 2,
00:17:28.346 "max_queue_depth": 64,
00:17:28.346 "default_time2wait": 2,
00:17:28.346 "default_time2retain": 20,
00:17:28.346 "first_burst_length": 8192,
00:17:28.346 "immediate_data": true,
00:17:28.346 "allow_duplicated_isid": false,
00:17:28.346 "error_recovery_level": 0,
00:17:28.346 "nop_timeout": 60,
00:17:28.346 "nop_in_interval": 30,
00:17:28.346 "disable_chap": false,
00:17:28.346 "require_chap": false,
00:17:28.346 "mutual_chap": false,
00:17:28.346 "chap_group": 0,
00:17:28.346 "max_large_datain_per_connection": 64,
00:17:28.346 "max_r2t_per_connection": 4,
00:17:28.346 "pdu_pool_size": 36864,
00:17:28.346 "immediate_data_pool_size": 16384,
00:17:28.346 "data_out_pool_size": 2048
00:17:28.346 }
00:17:28.346 }
00:17:28.346 ]
00:17:28.346 }
00:17:28.346 ]
00:17:28.346 }
00:17:28.346 12:49:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:17:28.346 12:49:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56363
00:17:28.346 12:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56363 ']'
00:17:28.346 12:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56363
00:17:28.346 12:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:17:28.346 12:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:28.346 12:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56363
00:17:28.346 12:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:28.346 12:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 56363
12:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56363'
12:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56363
12:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56363
00:17:29.728 12:49:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56403
00:17:29.728 12:49:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:17:29.728 12:49:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:17:34.986 12:49:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56403
00:17:34.987 12:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56403 ']'
00:17:34.987 12:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56403
00:17:34.987 12:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:17:34.987 12:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:34.987 12:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56403
00:17:34.987 12:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
killing process with pid 56403
12:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:34.987 12:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56403'
00:17:34.987 12:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56403
00:17:34.987 12:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56403
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:17:36.002
00:17:36.002 real 0m8.508s
00:17:36.002 user 0m8.128s
00:17:36.002 sys 0m0.562s
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:17:36.002 ************************************
00:17:36.002 END TEST skip_rpc_with_json
00:17:36.002 ************************************
00:17:36.002 12:49:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:17:36.002 12:49:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:36.002 12:49:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:36.002 12:49:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:36.002 ************************************
00:17:36.002 START TEST skip_rpc_with_delay
00:17:36.002 ************************************
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
[2024-12-05 12:49:18.424106] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:36.002
00:17:36.002 real 0m0.122s
00:17:36.002 user 0m0.065s
00:17:36.002 sys 0m0.056s
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:36.002 12:49:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:17:36.002 ************************************
00:17:36.002 END TEST skip_rpc_with_delay
00:17:36.002 ************************************
00:17:36.002 12:49:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:17:36.002 12:49:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:17:36.002 12:49:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:17:36.002 12:49:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:36.002 12:49:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:36.002 12:49:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:36.002 ************************************
00:17:36.002 START TEST exit_on_failed_rpc_init
00:17:36.002 ************************************
00:17:36.002 12:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:17:36.002 12:49:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=56520
00:17:36.002 12:49:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 56520
00:17:36.002 12:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 56520 ']'
00:17:36.002 12:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:36.002 12:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
12:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
12:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:36.002 12:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:17:36.002 12:49:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:17:36.262 [2024-12-05 12:49:18.587806] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization...
00:17:36.262 [2024-12-05 12:49:18.587939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56520 ]
00:17:36.262 [2024-12-05 12:49:18.749100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:36.524 [2024-12-05 12:49:18.851471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:37.089 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:37.089 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:17:37.089 12:49:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:17:37.089 12:49:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:17:37.089 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:17:37.089 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:17:37.089 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:37.089 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:37.089 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:37.089 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:37.089 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:37.089 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:37.089 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:37.089 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:17:37.089 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:17:37.089 [2024-12-05 12:49:19.521274] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization...
00:17:37.089 [2024-12-05 12:49:19.521376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56538 ]
00:17:37.346 [2024-12-05 12:49:19.674664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:37.346 [2024-12-05 12:49:19.788844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:37.346 [2024-12-05 12:49:19.788844] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:17:37.346 [2024-12-05 12:49:19.788944] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:17:37.346 [2024-12-05 12:49:19.788955] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:37.606 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:17:37.606 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:37.606 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:17:37.606 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:17:37.606 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:17:37.606 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:37.606 12:49:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:17:37.606 12:49:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 56520
00:17:37.606 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 56520 ']'
00:17:37.606 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 56520
00:17:37.606 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:17:37.606 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:37.606 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56520
00:17:37.606 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:37.606 12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 56520
12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56520'
12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 56520
12:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 56520
00:17:39.031
00:17:39.031 real 0m3.008s
00:17:39.031 user 0m3.281s
00:17:39.031 sys 0m0.404s
************************************
END TEST exit_on_failed_rpc_init
************************************
00:17:39.031 12:49:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:39.031 12:49:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:17:39.031 12:49:21 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:17:39.031
00:17:39.031 real 0m18.173s
00:17:39.031 user 0m17.467s
00:17:39.031 sys 0m1.454s
00:17:39.031 12:49:21 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:39.031 12:49:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST skip_rpc
************************************
00:17:39.031 12:49:21 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:17:39.031 12:49:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:39.031 12:49:21 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:39.031 12:49:21 -- common/autotest_common.sh@10 -- # set +x
00:17:39.031 ************************************
00:17:39.031 START TEST rpc_client
00:17:39.031 ************************************
00:17:39.031 12:49:21 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:17:39.290 * Looking for test storage...
00:17:39.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:17:39.290 12:49:21 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:39.290 12:49:21 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:39.290 12:49:21 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version
00:17:39.290 12:49:21 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@345 -- # : 1
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@353 -- # local d=1
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@355 -- # echo 1
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@353 -- # local d=2
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@355 -- # echo 2
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:39.290 12:49:21 rpc_client -- scripts/common.sh@368 -- # return 0
00:17:39.290 12:49:21 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:39.290 12:49:21 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:39.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:39.290 --rc genhtml_branch_coverage=1
00:17:39.290 --rc genhtml_function_coverage=1
00:17:39.290 --rc genhtml_legend=1
00:17:39.290 --rc geninfo_all_blocks=1
00:17:39.290 --rc geninfo_unexecuted_blocks=1
00:17:39.290
00:17:39.290 '
00:17:39.290 12:49:21 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:39.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:39.290 --rc genhtml_branch_coverage=1
00:17:39.290 --rc genhtml_function_coverage=1
00:17:39.290 --rc genhtml_legend=1
00:17:39.290 --rc geninfo_all_blocks=1
00:17:39.290 --rc geninfo_unexecuted_blocks=1
00:17:39.290
00:17:39.290 '
00:17:39.290 12:49:21 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:17:39.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:39.290 --rc genhtml_branch_coverage=1
00:17:39.290 --rc genhtml_function_coverage=1
00:17:39.290 --rc genhtml_legend=1
00:17:39.290 --rc geninfo_all_blocks=1
00:17:39.290 --rc geninfo_unexecuted_blocks=1
00:17:39.290
00:17:39.290 '
00:17:39.290 12:49:21 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:17:39.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:39.290 --rc genhtml_branch_coverage=1
00:17:39.290 --rc genhtml_function_coverage=1
00:17:39.290 --rc genhtml_legend=1
00:17:39.290 --rc geninfo_all_blocks=1
00:17:39.290 --rc geninfo_unexecuted_blocks=1
00:17:39.290
00:17:39.290 '
00:17:39.290 12:49:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:17:39.290 OK
00:17:39.290 12:49:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:17:39.290
00:17:39.290 real 0m0.170s
00:17:39.290 user 0m0.099s
00:17:39.290 sys 0m0.077s
00:17:39.290 12:49:21 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:39.290 12:49:21 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:17:39.290 ************************************
00:17:39.290 END TEST rpc_client
00:17:39.290 ************************************
00:17:39.290 12:49:21 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:17:39.290 12:49:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:39.290 12:49:21 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:39.290 12:49:21 -- common/autotest_common.sh@10 -- # set +x
00:17:39.290 ************************************
00:17:39.290 START TEST json_config
00:17:39.290 ************************************
00:17:39.290 12:49:21 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:17:39.290 12:49:21 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:39.290 12:49:21 json_config -- common/autotest_common.sh@1711 -- # lcov --version
00:17:39.290 12:49:21 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:39.550 12:49:21 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:39.550 12:49:21 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:39.550 12:49:21 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:39.550 12:49:21 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:39.550 12:49:21 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:17:39.550 12:49:21 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:17:39.550 12:49:21 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:17:39.550 12:49:21 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:17:39.550 12:49:21 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:17:39.550 12:49:21 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:17:39.550 12:49:21 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:17:39.550 12:49:21 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:39.550 12:49:21 json_config -- scripts/common.sh@344 -- # case "$op" in
00:17:39.550 12:49:21 json_config -- scripts/common.sh@345 -- # : 1
00:17:39.550 12:49:21 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:39.550 12:49:21 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:39.550 12:49:21 json_config -- scripts/common.sh@365 -- # decimal 1
00:17:39.550 12:49:21 json_config -- scripts/common.sh@353 -- # local d=1
00:17:39.550 12:49:21 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:39.550 12:49:21 json_config -- scripts/common.sh@355 -- # echo 1
00:17:39.550 12:49:21 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:17:39.550 12:49:21 json_config -- scripts/common.sh@366 -- # decimal 2
00:17:39.550 12:49:21 json_config -- scripts/common.sh@353 -- # local d=2
00:17:39.550 12:49:21 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:39.550 12:49:21 json_config -- scripts/common.sh@355 -- # echo 2
00:17:39.550 12:49:21 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:17:39.550 12:49:21 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:39.550 12:49:21 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:39.550 12:49:21 json_config -- scripts/common.sh@368 -- # return 0
00:17:39.550 12:49:21 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:39.550 12:49:21 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:39.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:39.550 --rc genhtml_branch_coverage=1
00:17:39.550 --rc genhtml_function_coverage=1
00:17:39.550 --rc genhtml_legend=1
00:17:39.550 --rc geninfo_all_blocks=1
00:17:39.550 --rc geninfo_unexecuted_blocks=1
00:17:39.550
00:17:39.550 '
00:17:39.550 12:49:21 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:39.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:39.550 --rc genhtml_branch_coverage=1
00:17:39.550 --rc genhtml_function_coverage=1
00:17:39.550 --rc genhtml_legend=1
00:17:39.550 --rc geninfo_all_blocks=1
00:17:39.550 --rc geninfo_unexecuted_blocks=1
00:17:39.550
00:17:39.550 '
00:17:39.550 12:49:21 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:17:39.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:39.550 --rc genhtml_branch_coverage=1
00:17:39.550 --rc genhtml_function_coverage=1
00:17:39.550 --rc genhtml_legend=1
00:17:39.550 --rc geninfo_all_blocks=1
00:17:39.550 --rc geninfo_unexecuted_blocks=1
00:17:39.550
00:17:39.550 '
00:17:39.550 12:49:21 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:17:39.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:39.550 --rc genhtml_branch_coverage=1
00:17:39.550 --rc genhtml_function_coverage=1
00:17:39.550 --rc genhtml_legend=1
00:17:39.550 --rc geninfo_all_blocks=1
00:17:39.550 --rc geninfo_unexecuted_blocks=1
00:17:39.550
00:17:39.550 '
00:17:39.550 12:49:21 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@7 -- # uname -s
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ea58e83f-bd42-45fc-a617-d0e3b2b9b56b
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=ea58e83f-bd42-45fc-a617-d0e3b2b9b56b
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:17:39.550 12:49:21 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:17:39.550 12:49:21 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:39.550 12:49:21 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:39.550 12:49:21 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:39.550 12:49:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:39.550 12:49:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:39.550 12:49:21 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:39.550 12:49:21 json_config -- paths/export.sh@5 -- # export PATH
00:17:39.550 12:49:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@51 -- # : 0
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:39.550 12:49:21 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
12:49:21 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
12:49:21 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:17:39.551 12:49:21 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:17:39.551 12:49:21 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:17:39.551 12:49:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:17:39.551 12:49:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:17:39.551 12:49:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:17:39.551 12:49:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:17:39.551 WARNING: No tests are enabled so not running JSON configuration tests 00:17:39.551 12:49:21 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:17:39.551 12:49:21 json_config -- json_config/json_config.sh@28 -- # exit 0 00:17:39.551 00:17:39.551 real 0m0.139s 00:17:39.551 user 0m0.091s 00:17:39.551 sys 0m0.053s 00:17:39.551 12:49:21 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.551 12:49:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:39.551 ************************************ 00:17:39.551 END TEST json_config 00:17:39.551 ************************************ 00:17:39.551 12:49:21 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:17:39.551 12:49:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:39.551 12:49:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:39.551 12:49:21 -- common/autotest_common.sh@10 -- # set +x 00:17:39.551 ************************************ 00:17:39.551 START TEST json_config_extra_key 00:17:39.551 ************************************ 00:17:39.551 12:49:21 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:17:39.551 12:49:22 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:39.551 12:49:22 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:17:39.551 12:49:22 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:39.551 12:49:22 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:17:39.551 12:49:22 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:39.551 12:49:22 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:39.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.551 --rc genhtml_branch_coverage=1 00:17:39.551 --rc genhtml_function_coverage=1 00:17:39.551 --rc genhtml_legend=1 00:17:39.551 --rc geninfo_all_blocks=1 00:17:39.551 --rc geninfo_unexecuted_blocks=1 00:17:39.551 00:17:39.551 ' 00:17:39.551 12:49:22 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:39.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.551 --rc genhtml_branch_coverage=1 00:17:39.551 --rc genhtml_function_coverage=1 00:17:39.551 --rc 
genhtml_legend=1 00:17:39.551 --rc geninfo_all_blocks=1 00:17:39.551 --rc geninfo_unexecuted_blocks=1 00:17:39.551 00:17:39.551 ' 00:17:39.551 12:49:22 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:39.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.551 --rc genhtml_branch_coverage=1 00:17:39.551 --rc genhtml_function_coverage=1 00:17:39.551 --rc genhtml_legend=1 00:17:39.551 --rc geninfo_all_blocks=1 00:17:39.551 --rc geninfo_unexecuted_blocks=1 00:17:39.551 00:17:39.551 ' 00:17:39.551 12:49:22 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:39.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.551 --rc genhtml_branch_coverage=1 00:17:39.551 --rc genhtml_function_coverage=1 00:17:39.551 --rc genhtml_legend=1 00:17:39.551 --rc geninfo_all_blocks=1 00:17:39.551 --rc geninfo_unexecuted_blocks=1 00:17:39.551 00:17:39.551 ' 00:17:39.551 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:39.551 12:49:22 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:17:39.551 12:49:22 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.551 12:49:22 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.551 12:49:22 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.551 12:49:22 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.551 12:49:22 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.551 12:49:22 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.551 12:49:22 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.551 12:49:22 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.551 12:49:22 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.551 12:49:22 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.551 12:49:22 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ea58e83f-bd42-45fc-a617-d0e3b2b9b56b 00:17:39.551 12:49:22 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=ea58e83f-bd42-45fc-a617-d0e3b2b9b56b 00:17:39.551 12:49:22 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.551 12:49:22 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.551 12:49:22 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:39.551 12:49:22 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.551 12:49:22 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.551 12:49:22 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.551 12:49:22 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.551 12:49:22 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.551 12:49:22 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.551 12:49:22 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:17:39.551 12:49:22 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.551 12:49:22 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:17:39.551 12:49:22 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:39.551 12:49:22 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:39.552 12:49:22 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.552 12:49:22 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.552 12:49:22 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:17:39.552 12:49:22 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:39.552 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:39.552 12:49:22 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:39.552 12:49:22 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:39.552 12:49:22 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:39.552 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:17:39.552 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:17:39.552 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:17:39.552 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:17:39.552 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:17:39.552 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:17:39.552 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:17:39.552 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:17:39.552 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:17:39.552 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:17:39.552 INFO: launching applications... 00:17:39.552 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:17:39.552 12:49:22 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:17:39.552 12:49:22 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:17:39.552 12:49:22 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:17:39.552 12:49:22 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:17:39.552 12:49:22 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:17:39.552 12:49:22 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:17:39.552 12:49:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:39.552 12:49:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:39.552 Waiting for target to run... 00:17:39.552 12:49:22 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=56731 00:17:39.552 12:49:22 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:17:39.552 12:49:22 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 56731 /var/tmp/spdk_tgt.sock 00:17:39.552 12:49:22 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 56731 ']' 00:17:39.552 12:49:22 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:17:39.552 12:49:22 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:17:39.552 12:49:22 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:17:39.552 12:49:22 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:17:39.552 12:49:22 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.552 12:49:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:17:39.810 [2024-12-05 12:49:22.188048] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:17:39.810 [2024-12-05 12:49:22.188176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56731 ] 00:17:40.069 [2024-12-05 12:49:22.511159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.069 [2024-12-05 12:49:22.605370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.635 12:49:23 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:40.635 12:49:23 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:17:40.635 00:17:40.635 12:49:23 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:17:40.635 INFO: shutting down applications... 00:17:40.635 12:49:23 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:17:40.635 12:49:23 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:17:40.635 12:49:23 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:17:40.635 12:49:23 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:17:40.635 12:49:23 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 56731 ]] 00:17:40.635 12:49:23 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 56731 00:17:40.635 12:49:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:17:40.635 12:49:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:40.635 12:49:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56731 00:17:40.635 12:49:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:41.201 12:49:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:41.201 12:49:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:41.201 12:49:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56731 00:17:41.201 12:49:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:41.823 12:49:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:41.823 12:49:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:41.823 12:49:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56731 00:17:41.823 12:49:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:42.084 12:49:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:42.084 12:49:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:42.084 12:49:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56731 00:17:42.084 12:49:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:42.652 12:49:25 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:17:42.652 12:49:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:42.652 12:49:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 56731 00:17:42.652 12:49:25 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:17:42.652 12:49:25 json_config_extra_key -- json_config/common.sh@43 -- # break 00:17:42.652 12:49:25 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:17:42.652 SPDK target shutdown done 00:17:42.652 12:49:25 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:17:42.652 Success 00:17:42.652 12:49:25 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:17:42.652 00:17:42.652 real 0m3.159s 00:17:42.652 user 0m2.783s 00:17:42.652 sys 0m0.416s 00:17:42.652 12:49:25 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:42.652 12:49:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:17:42.652 ************************************ 00:17:42.652 END TEST json_config_extra_key 00:17:42.652 ************************************ 00:17:42.652 12:49:25 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:17:42.652 12:49:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:42.652 12:49:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:42.652 12:49:25 -- common/autotest_common.sh@10 -- # set +x 00:17:42.652 ************************************ 00:17:42.652 START TEST alias_rpc 00:17:42.652 ************************************ 00:17:42.652 12:49:25 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:17:42.652 * Looking for test storage... 
00:17:42.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:17:42.652 12:49:25 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:42.652 12:49:25 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:42.652 12:49:25 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:42.909 12:49:25 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:42.909 12:49:25 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:42.909 12:49:25 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:42.909 12:49:25 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:42.909 12:49:25 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:42.909 12:49:25 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:42.909 12:49:25 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@345 -- # : 1 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:42.910 12:49:25 alias_rpc -- scripts/common.sh@368 -- # return 0 00:17:42.910 12:49:25 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:42.910 12:49:25 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:42.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.910 --rc genhtml_branch_coverage=1 00:17:42.910 --rc genhtml_function_coverage=1 00:17:42.910 --rc genhtml_legend=1 00:17:42.910 --rc geninfo_all_blocks=1 00:17:42.910 --rc geninfo_unexecuted_blocks=1 00:17:42.910 00:17:42.910 ' 00:17:42.910 12:49:25 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:42.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.910 --rc genhtml_branch_coverage=1 00:17:42.910 --rc genhtml_function_coverage=1 00:17:42.910 --rc genhtml_legend=1 00:17:42.910 --rc geninfo_all_blocks=1 00:17:42.910 --rc geninfo_unexecuted_blocks=1 00:17:42.910 00:17:42.910 ' 00:17:42.910 12:49:25 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:17:42.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.910 --rc genhtml_branch_coverage=1 00:17:42.910 --rc genhtml_function_coverage=1 00:17:42.910 --rc genhtml_legend=1 00:17:42.910 --rc geninfo_all_blocks=1 00:17:42.910 --rc geninfo_unexecuted_blocks=1 00:17:42.910 00:17:42.910 ' 00:17:42.910 12:49:25 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:42.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.910 --rc genhtml_branch_coverage=1 00:17:42.910 --rc genhtml_function_coverage=1 00:17:42.910 --rc genhtml_legend=1 00:17:42.910 --rc geninfo_all_blocks=1 00:17:42.910 --rc geninfo_unexecuted_blocks=1 00:17:42.910 00:17:42.910 ' 00:17:42.910 12:49:25 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:17:42.910 12:49:25 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56830 00:17:42.910 12:49:25 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56830 00:17:42.910 12:49:25 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:42.910 12:49:25 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 56830 ']' 00:17:42.910 12:49:25 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.910 12:49:25 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:42.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.910 12:49:25 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.910 12:49:25 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:42.910 12:49:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.910 [2024-12-05 12:49:25.372466] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:17:42.910 [2024-12-05 12:49:25.372608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56830 ] 00:17:43.167 [2024-12-05 12:49:25.534139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.167 [2024-12-05 12:49:25.636465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.732 12:49:26 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.732 12:49:26 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:43.732 12:49:26 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:17:43.990 12:49:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56830 00:17:43.990 12:49:26 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 56830 ']' 00:17:43.990 12:49:26 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 56830 00:17:43.990 12:49:26 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:17:43.990 12:49:26 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.990 12:49:26 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56830 00:17:43.990 12:49:26 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:43.990 12:49:26 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:43.990 killing process with pid 56830 00:17:43.990 12:49:26 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56830' 00:17:43.990 12:49:26 alias_rpc -- common/autotest_common.sh@973 -- # kill 56830 00:17:43.990 12:49:26 alias_rpc -- common/autotest_common.sh@978 -- # wait 56830 00:17:45.982 00:17:45.982 real 0m2.935s 00:17:45.982 user 0m3.103s 00:17:45.982 sys 0m0.435s 00:17:45.982 12:49:28 alias_rpc -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:17:45.982 12:49:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.982 ************************************ 00:17:45.982 END TEST alias_rpc 00:17:45.982 ************************************ 00:17:45.982 12:49:28 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:17:45.982 12:49:28 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:17:45.982 12:49:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:45.982 12:49:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:45.982 12:49:28 -- common/autotest_common.sh@10 -- # set +x 00:17:45.982 ************************************ 00:17:45.982 START TEST spdkcli_tcp 00:17:45.982 ************************************ 00:17:45.982 12:49:28 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:17:45.982 * Looking for test storage... 00:17:45.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:45.982 12:49:28 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:45.982 12:49:28 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:17:45.982 12:49:28 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:45.982 12:49:28 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:45.982 12:49:28 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:45.982 12:49:28 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:45.982 12:49:28 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:45.982 12:49:28 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.982 12:49:28 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:17:45.982 12:49:28 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:17:45.982 12:49:28 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:17:45.982 12:49:28 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:17:45.982 
12:49:28 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:17:45.982 12:49:28 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:17:45.982 12:49:28 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:45.982 12:49:28 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:17:45.982 12:49:28 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:17:45.982 12:49:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:45.982 12:49:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:45.983 12:49:28 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:17:45.983 12:49:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:17:45.983 12:49:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.983 12:49:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:17:45.983 12:49:28 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:17:45.983 12:49:28 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:17:45.983 12:49:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:17:45.983 12:49:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:45.983 12:49:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:17:45.983 12:49:28 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:17:45.983 12:49:28 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:45.983 12:49:28 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:45.983 12:49:28 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:17:45.983 12:49:28 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:45.983 12:49:28 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:45.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.983 --rc genhtml_branch_coverage=1 00:17:45.983 --rc genhtml_function_coverage=1 00:17:45.983 --rc genhtml_legend=1 
00:17:45.983 --rc geninfo_all_blocks=1 00:17:45.983 --rc geninfo_unexecuted_blocks=1 00:17:45.983 00:17:45.983 ' 00:17:45.983 12:49:28 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:45.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.983 --rc genhtml_branch_coverage=1 00:17:45.983 --rc genhtml_function_coverage=1 00:17:45.983 --rc genhtml_legend=1 00:17:45.983 --rc geninfo_all_blocks=1 00:17:45.983 --rc geninfo_unexecuted_blocks=1 00:17:45.983 00:17:45.983 ' 00:17:45.983 12:49:28 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:45.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.983 --rc genhtml_branch_coverage=1 00:17:45.983 --rc genhtml_function_coverage=1 00:17:45.983 --rc genhtml_legend=1 00:17:45.983 --rc geninfo_all_blocks=1 00:17:45.983 --rc geninfo_unexecuted_blocks=1 00:17:45.983 00:17:45.983 ' 00:17:45.983 12:49:28 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:45.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.983 --rc genhtml_branch_coverage=1 00:17:45.983 --rc genhtml_function_coverage=1 00:17:45.983 --rc genhtml_legend=1 00:17:45.983 --rc geninfo_all_blocks=1 00:17:45.983 --rc geninfo_unexecuted_blocks=1 00:17:45.983 00:17:45.983 ' 00:17:45.983 12:49:28 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:45.983 12:49:28 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:45.983 12:49:28 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:45.983 12:49:28 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:17:45.983 12:49:28 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:17:45.983 12:49:28 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:45.983 12:49:28 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:17:45.983 12:49:28 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:45.983 12:49:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:45.983 12:49:28 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=56921 00:17:45.983 12:49:28 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 56921 00:17:45.983 12:49:28 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 56921 ']' 00:17:45.983 12:49:28 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.983 12:49:28 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.983 12:49:28 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.983 12:49:28 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:45.983 12:49:28 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.983 12:49:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:45.983 [2024-12-05 12:49:28.329351] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:17:45.983 [2024-12-05 12:49:28.329452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56921 ] 00:17:45.983 [2024-12-05 12:49:28.485409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:46.241 [2024-12-05 12:49:28.588059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.241 [2024-12-05 12:49:28.588173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.807 12:49:29 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.807 12:49:29 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:17:46.807 12:49:29 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=56937 00:17:46.807 12:49:29 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:17:46.807 12:49:29 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:17:46.807 [ 00:17:46.807 "bdev_malloc_delete", 00:17:46.807 "bdev_malloc_create", 00:17:46.807 "bdev_null_resize", 00:17:46.807 "bdev_null_delete", 00:17:46.807 "bdev_null_create", 00:17:46.807 "bdev_nvme_cuse_unregister", 00:17:46.807 "bdev_nvme_cuse_register", 00:17:46.807 "bdev_opal_new_user", 00:17:46.807 "bdev_opal_set_lock_state", 00:17:46.807 "bdev_opal_delete", 00:17:46.807 "bdev_opal_get_info", 00:17:46.807 "bdev_opal_create", 00:17:46.807 "bdev_nvme_opal_revert", 00:17:46.807 "bdev_nvme_opal_init", 00:17:46.807 "bdev_nvme_send_cmd", 00:17:46.807 "bdev_nvme_set_keys", 00:17:46.807 "bdev_nvme_get_path_iostat", 00:17:46.807 "bdev_nvme_get_mdns_discovery_info", 00:17:46.807 "bdev_nvme_stop_mdns_discovery", 00:17:46.807 "bdev_nvme_start_mdns_discovery", 00:17:46.807 "bdev_nvme_set_multipath_policy", 00:17:46.807 
"bdev_nvme_set_preferred_path", 00:17:46.807 "bdev_nvme_get_io_paths", 00:17:46.807 "bdev_nvme_remove_error_injection", 00:17:46.807 "bdev_nvme_add_error_injection", 00:17:46.807 "bdev_nvme_get_discovery_info", 00:17:46.807 "bdev_nvme_stop_discovery", 00:17:46.807 "bdev_nvme_start_discovery", 00:17:46.807 "bdev_nvme_get_controller_health_info", 00:17:46.807 "bdev_nvme_disable_controller", 00:17:46.807 "bdev_nvme_enable_controller", 00:17:46.807 "bdev_nvme_reset_controller", 00:17:46.807 "bdev_nvme_get_transport_statistics", 00:17:46.807 "bdev_nvme_apply_firmware", 00:17:46.807 "bdev_nvme_detach_controller", 00:17:46.807 "bdev_nvme_get_controllers", 00:17:46.807 "bdev_nvme_attach_controller", 00:17:46.807 "bdev_nvme_set_hotplug", 00:17:46.807 "bdev_nvme_set_options", 00:17:46.807 "bdev_passthru_delete", 00:17:46.807 "bdev_passthru_create", 00:17:46.807 "bdev_lvol_set_parent_bdev", 00:17:46.807 "bdev_lvol_set_parent", 00:17:46.807 "bdev_lvol_check_shallow_copy", 00:17:46.807 "bdev_lvol_start_shallow_copy", 00:17:46.807 "bdev_lvol_grow_lvstore", 00:17:46.807 "bdev_lvol_get_lvols", 00:17:46.808 "bdev_lvol_get_lvstores", 00:17:46.808 "bdev_lvol_delete", 00:17:46.808 "bdev_lvol_set_read_only", 00:17:46.808 "bdev_lvol_resize", 00:17:46.808 "bdev_lvol_decouple_parent", 00:17:46.808 "bdev_lvol_inflate", 00:17:46.808 "bdev_lvol_rename", 00:17:46.808 "bdev_lvol_clone_bdev", 00:17:46.808 "bdev_lvol_clone", 00:17:46.808 "bdev_lvol_snapshot", 00:17:46.808 "bdev_lvol_create", 00:17:46.808 "bdev_lvol_delete_lvstore", 00:17:46.808 "bdev_lvol_rename_lvstore", 00:17:46.808 "bdev_lvol_create_lvstore", 00:17:46.808 "bdev_raid_set_options", 00:17:46.808 "bdev_raid_remove_base_bdev", 00:17:46.808 "bdev_raid_add_base_bdev", 00:17:46.808 "bdev_raid_delete", 00:17:46.808 "bdev_raid_create", 00:17:46.808 "bdev_raid_get_bdevs", 00:17:46.808 "bdev_error_inject_error", 00:17:46.808 "bdev_error_delete", 00:17:46.808 "bdev_error_create", 00:17:46.808 "bdev_split_delete", 00:17:46.808 
"bdev_split_create", 00:17:46.808 "bdev_delay_delete", 00:17:46.808 "bdev_delay_create", 00:17:46.808 "bdev_delay_update_latency", 00:17:46.808 "bdev_zone_block_delete", 00:17:46.808 "bdev_zone_block_create", 00:17:46.808 "blobfs_create", 00:17:46.808 "blobfs_detect", 00:17:46.808 "blobfs_set_cache_size", 00:17:46.808 "bdev_aio_delete", 00:17:46.808 "bdev_aio_rescan", 00:17:46.808 "bdev_aio_create", 00:17:46.808 "bdev_ftl_set_property", 00:17:46.808 "bdev_ftl_get_properties", 00:17:46.808 "bdev_ftl_get_stats", 00:17:46.808 "bdev_ftl_unmap", 00:17:46.808 "bdev_ftl_unload", 00:17:46.808 "bdev_ftl_delete", 00:17:46.808 "bdev_ftl_load", 00:17:46.808 "bdev_ftl_create", 00:17:46.808 "bdev_virtio_attach_controller", 00:17:46.808 "bdev_virtio_scsi_get_devices", 00:17:46.808 "bdev_virtio_detach_controller", 00:17:46.808 "bdev_virtio_blk_set_hotplug", 00:17:46.808 "bdev_iscsi_delete", 00:17:46.808 "bdev_iscsi_create", 00:17:46.808 "bdev_iscsi_set_options", 00:17:46.808 "accel_error_inject_error", 00:17:46.808 "ioat_scan_accel_module", 00:17:46.808 "dsa_scan_accel_module", 00:17:46.808 "iaa_scan_accel_module", 00:17:46.808 "keyring_file_remove_key", 00:17:46.808 "keyring_file_add_key", 00:17:46.808 "keyring_linux_set_options", 00:17:46.808 "fsdev_aio_delete", 00:17:46.808 "fsdev_aio_create", 00:17:46.808 "iscsi_get_histogram", 00:17:46.808 "iscsi_enable_histogram", 00:17:46.808 "iscsi_set_options", 00:17:46.808 "iscsi_get_auth_groups", 00:17:46.808 "iscsi_auth_group_remove_secret", 00:17:46.808 "iscsi_auth_group_add_secret", 00:17:46.808 "iscsi_delete_auth_group", 00:17:46.808 "iscsi_create_auth_group", 00:17:46.808 "iscsi_set_discovery_auth", 00:17:46.808 "iscsi_get_options", 00:17:46.808 "iscsi_target_node_request_logout", 00:17:46.808 "iscsi_target_node_set_redirect", 00:17:46.808 "iscsi_target_node_set_auth", 00:17:46.808 "iscsi_target_node_add_lun", 00:17:46.808 "iscsi_get_stats", 00:17:46.808 "iscsi_get_connections", 00:17:46.808 "iscsi_portal_group_set_auth", 
00:17:46.808 "iscsi_start_portal_group", 00:17:46.808 "iscsi_delete_portal_group", 00:17:46.808 "iscsi_create_portal_group", 00:17:46.808 "iscsi_get_portal_groups", 00:17:46.808 "iscsi_delete_target_node", 00:17:46.808 "iscsi_target_node_remove_pg_ig_maps", 00:17:46.808 "iscsi_target_node_add_pg_ig_maps", 00:17:46.808 "iscsi_create_target_node", 00:17:46.808 "iscsi_get_target_nodes", 00:17:46.808 "iscsi_delete_initiator_group", 00:17:46.808 "iscsi_initiator_group_remove_initiators", 00:17:46.808 "iscsi_initiator_group_add_initiators", 00:17:46.808 "iscsi_create_initiator_group", 00:17:46.808 "iscsi_get_initiator_groups", 00:17:46.808 "nvmf_set_crdt", 00:17:46.808 "nvmf_set_config", 00:17:46.808 "nvmf_set_max_subsystems", 00:17:46.808 "nvmf_stop_mdns_prr", 00:17:46.808 "nvmf_publish_mdns_prr", 00:17:46.808 "nvmf_subsystem_get_listeners", 00:17:46.808 "nvmf_subsystem_get_qpairs", 00:17:46.808 "nvmf_subsystem_get_controllers", 00:17:46.808 "nvmf_get_stats", 00:17:46.808 "nvmf_get_transports", 00:17:46.808 "nvmf_create_transport", 00:17:46.808 "nvmf_get_targets", 00:17:46.808 "nvmf_delete_target", 00:17:46.808 "nvmf_create_target", 00:17:46.808 "nvmf_subsystem_allow_any_host", 00:17:46.808 "nvmf_subsystem_set_keys", 00:17:46.808 "nvmf_subsystem_remove_host", 00:17:46.808 "nvmf_subsystem_add_host", 00:17:46.808 "nvmf_ns_remove_host", 00:17:46.808 "nvmf_ns_add_host", 00:17:46.808 "nvmf_subsystem_remove_ns", 00:17:46.808 "nvmf_subsystem_set_ns_ana_group", 00:17:46.808 "nvmf_subsystem_add_ns", 00:17:46.808 "nvmf_subsystem_listener_set_ana_state", 00:17:46.808 "nvmf_discovery_get_referrals", 00:17:46.808 "nvmf_discovery_remove_referral", 00:17:46.808 "nvmf_discovery_add_referral", 00:17:46.808 "nvmf_subsystem_remove_listener", 00:17:46.808 "nvmf_subsystem_add_listener", 00:17:46.808 "nvmf_delete_subsystem", 00:17:46.808 "nvmf_create_subsystem", 00:17:46.808 "nvmf_get_subsystems", 00:17:46.808 "env_dpdk_get_mem_stats", 00:17:46.808 "nbd_get_disks", 00:17:46.808 
"nbd_stop_disk", 00:17:46.808 "nbd_start_disk", 00:17:46.808 "ublk_recover_disk", 00:17:46.808 "ublk_get_disks", 00:17:46.808 "ublk_stop_disk", 00:17:46.808 "ublk_start_disk", 00:17:46.808 "ublk_destroy_target", 00:17:46.808 "ublk_create_target", 00:17:46.808 "virtio_blk_create_transport", 00:17:46.808 "virtio_blk_get_transports", 00:17:46.808 "vhost_controller_set_coalescing", 00:17:46.808 "vhost_get_controllers", 00:17:46.808 "vhost_delete_controller", 00:17:46.808 "vhost_create_blk_controller", 00:17:46.808 "vhost_scsi_controller_remove_target", 00:17:46.808 "vhost_scsi_controller_add_target", 00:17:46.808 "vhost_start_scsi_controller", 00:17:46.808 "vhost_create_scsi_controller", 00:17:46.808 "thread_set_cpumask", 00:17:46.808 "scheduler_set_options", 00:17:46.808 "framework_get_governor", 00:17:46.808 "framework_get_scheduler", 00:17:46.808 "framework_set_scheduler", 00:17:46.808 "framework_get_reactors", 00:17:46.808 "thread_get_io_channels", 00:17:46.808 "thread_get_pollers", 00:17:46.808 "thread_get_stats", 00:17:46.808 "framework_monitor_context_switch", 00:17:46.808 "spdk_kill_instance", 00:17:46.808 "log_enable_timestamps", 00:17:46.808 "log_get_flags", 00:17:46.808 "log_clear_flag", 00:17:46.808 "log_set_flag", 00:17:46.808 "log_get_level", 00:17:46.808 "log_set_level", 00:17:46.808 "log_get_print_level", 00:17:46.808 "log_set_print_level", 00:17:46.808 "framework_enable_cpumask_locks", 00:17:46.808 "framework_disable_cpumask_locks", 00:17:46.808 "framework_wait_init", 00:17:46.808 "framework_start_init", 00:17:46.808 "scsi_get_devices", 00:17:46.808 "bdev_get_histogram", 00:17:46.808 "bdev_enable_histogram", 00:17:46.808 "bdev_set_qos_limit", 00:17:46.808 "bdev_set_qd_sampling_period", 00:17:46.808 "bdev_get_bdevs", 00:17:46.808 "bdev_reset_iostat", 00:17:46.808 "bdev_get_iostat", 00:17:46.808 "bdev_examine", 00:17:46.808 "bdev_wait_for_examine", 00:17:46.808 "bdev_set_options", 00:17:46.808 "accel_get_stats", 00:17:46.808 "accel_set_options", 
00:17:46.808 "accel_set_driver", 00:17:46.808 "accel_crypto_key_destroy", 00:17:46.808 "accel_crypto_keys_get", 00:17:46.808 "accel_crypto_key_create", 00:17:46.808 "accel_assign_opc", 00:17:46.808 "accel_get_module_info", 00:17:46.808 "accel_get_opc_assignments", 00:17:46.808 "vmd_rescan", 00:17:46.808 "vmd_remove_device", 00:17:46.808 "vmd_enable", 00:17:46.808 "sock_get_default_impl", 00:17:46.808 "sock_set_default_impl", 00:17:46.808 "sock_impl_set_options", 00:17:46.808 "sock_impl_get_options", 00:17:46.808 "iobuf_get_stats", 00:17:46.808 "iobuf_set_options", 00:17:46.808 "keyring_get_keys", 00:17:46.808 "framework_get_pci_devices", 00:17:46.808 "framework_get_config", 00:17:46.808 "framework_get_subsystems", 00:17:46.808 "fsdev_set_opts", 00:17:46.808 "fsdev_get_opts", 00:17:46.808 "trace_get_info", 00:17:46.808 "trace_get_tpoint_group_mask", 00:17:46.808 "trace_disable_tpoint_group", 00:17:46.808 "trace_enable_tpoint_group", 00:17:46.808 "trace_clear_tpoint_mask", 00:17:46.808 "trace_set_tpoint_mask", 00:17:46.808 "notify_get_notifications", 00:17:46.808 "notify_get_types", 00:17:46.808 "spdk_get_version", 00:17:46.808 "rpc_get_methods" 00:17:46.808 ] 00:17:47.066 12:49:29 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:17:47.066 12:49:29 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:47.066 12:49:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:47.066 12:49:29 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:47.066 12:49:29 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 56921 00:17:47.066 12:49:29 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 56921 ']' 00:17:47.066 12:49:29 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 56921 00:17:47.066 12:49:29 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:17:47.066 12:49:29 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:47.066 12:49:29 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56921 00:17:47.066 12:49:29 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:47.066 12:49:29 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:47.066 12:49:29 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56921' 00:17:47.066 killing process with pid 56921 00:17:47.066 12:49:29 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 56921 00:17:47.066 12:49:29 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 56921 00:17:48.565 00:17:48.565 real 0m2.839s 00:17:48.565 user 0m5.154s 00:17:48.565 sys 0m0.419s 00:17:48.565 12:49:30 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:48.565 12:49:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:48.565 ************************************ 00:17:48.565 END TEST spdkcli_tcp 00:17:48.565 ************************************ 00:17:48.565 12:49:31 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:17:48.565 12:49:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:48.565 12:49:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:48.565 12:49:31 -- common/autotest_common.sh@10 -- # set +x 00:17:48.565 ************************************ 00:17:48.565 START TEST dpdk_mem_utility 00:17:48.565 ************************************ 00:17:48.565 12:49:31 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:17:48.565 * Looking for test storage... 
00:17:48.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:17:48.565 12:49:31 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:48.565 12:49:31 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:48.565 12:49:31 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:17:48.565 12:49:31 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:48.565 12:49:31 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:48.823 12:49:31 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:17:48.823 12:49:31 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:48.823 12:49:31 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:48.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.823 --rc genhtml_branch_coverage=1 00:17:48.823 --rc genhtml_function_coverage=1 00:17:48.823 --rc genhtml_legend=1 00:17:48.823 --rc geninfo_all_blocks=1 00:17:48.823 --rc geninfo_unexecuted_blocks=1 00:17:48.823 00:17:48.823 ' 00:17:48.823 12:49:31 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:48.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.823 --rc genhtml_branch_coverage=1 00:17:48.823 --rc genhtml_function_coverage=1 00:17:48.823 --rc genhtml_legend=1 00:17:48.823 --rc geninfo_all_blocks=1 00:17:48.823 --rc 
geninfo_unexecuted_blocks=1 00:17:48.823 00:17:48.823 ' 00:17:48.823 12:49:31 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:48.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.823 --rc genhtml_branch_coverage=1 00:17:48.823 --rc genhtml_function_coverage=1 00:17:48.823 --rc genhtml_legend=1 00:17:48.823 --rc geninfo_all_blocks=1 00:17:48.823 --rc geninfo_unexecuted_blocks=1 00:17:48.823 00:17:48.823 ' 00:17:48.823 12:49:31 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:48.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.823 --rc genhtml_branch_coverage=1 00:17:48.823 --rc genhtml_function_coverage=1 00:17:48.823 --rc genhtml_legend=1 00:17:48.823 --rc geninfo_all_blocks=1 00:17:48.823 --rc geninfo_unexecuted_blocks=1 00:17:48.823 00:17:48.823 ' 00:17:48.823 12:49:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:17:48.823 12:49:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57031 00:17:48.823 12:49:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:48.823 12:49:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57031 00:17:48.823 12:49:31 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57031 ']' 00:17:48.823 12:49:31 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.823 12:49:31 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.823 12:49:31 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:48.823 12:49:31 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.823 12:49:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:17:48.823 [2024-12-05 12:49:31.224681] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:17:48.824 [2024-12-05 12:49:31.224804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57031 ] 00:17:48.824 [2024-12-05 12:49:31.385911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.081 [2024-12-05 12:49:31.486515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.648 12:49:32 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.648 12:49:32 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:17:49.648 12:49:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:17:49.648 12:49:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:17:49.648 12:49:32 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.648 12:49:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:17:49.648 { 00:17:49.648 "filename": "/tmp/spdk_mem_dump.txt" 00:17:49.648 } 00:17:49.648 12:49:32 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.648 12:49:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:17:49.648 DPDK memory size 824.000000 MiB in 1 heap(s) 00:17:49.648 1 heaps totaling size 824.000000 MiB 00:17:49.648 size: 824.000000 MiB heap id: 0 00:17:49.648 end heaps---------- 00:17:49.648 9 mempools totaling size 603.782043 MiB 00:17:49.648 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:17:49.648 size: 158.602051 MiB name: PDU_data_out_Pool 00:17:49.648 size: 100.555481 MiB name: bdev_io_57031 00:17:49.648 size: 50.003479 MiB name: msgpool_57031 00:17:49.648 size: 36.509338 MiB name: fsdev_io_57031 00:17:49.648 size: 21.763794 MiB name: PDU_Pool 00:17:49.648 size: 19.513306 MiB name: SCSI_TASK_Pool 00:17:49.648 size: 4.133484 MiB name: evtpool_57031 00:17:49.648 size: 0.026123 MiB name: Session_Pool 00:17:49.648 end mempools------- 00:17:49.648 6 memzones totaling size 4.142822 MiB 00:17:49.648 size: 1.000366 MiB name: RG_ring_0_57031 00:17:49.648 size: 1.000366 MiB name: RG_ring_1_57031 00:17:49.648 size: 1.000366 MiB name: RG_ring_4_57031 00:17:49.648 size: 1.000366 MiB name: RG_ring_5_57031 00:17:49.648 size: 0.125366 MiB name: RG_ring_2_57031 00:17:49.648 size: 0.015991 MiB name: RG_ring_3_57031 00:17:49.648 end memzones------- 00:17:49.648 12:49:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:17:49.648 heap id: 0 total size: 824.000000 MiB number of busy elements: 320 number of free elements: 18 00:17:49.648 list of free elements. 
size: 16.780151 MiB 00:17:49.648 element at address: 0x200006400000 with size: 1.995972 MiB 00:17:49.648 element at address: 0x20000a600000 with size: 1.995972 MiB 00:17:49.648 element at address: 0x200003e00000 with size: 1.991028 MiB 00:17:49.648 element at address: 0x200019500040 with size: 0.999939 MiB 00:17:49.648 element at address: 0x200019900040 with size: 0.999939 MiB 00:17:49.648 element at address: 0x200019a00000 with size: 0.999084 MiB 00:17:49.648 element at address: 0x200032600000 with size: 0.994324 MiB 00:17:49.648 element at address: 0x200000400000 with size: 0.992004 MiB 00:17:49.648 element at address: 0x200019200000 with size: 0.959656 MiB 00:17:49.648 element at address: 0x200019d00040 with size: 0.936401 MiB 00:17:49.648 element at address: 0x200000200000 with size: 0.716980 MiB 00:17:49.648 element at address: 0x20001b400000 with size: 0.560730 MiB 00:17:49.648 element at address: 0x200000c00000 with size: 0.489197 MiB 00:17:49.648 element at address: 0x200019600000 with size: 0.487976 MiB 00:17:49.648 element at address: 0x200019e00000 with size: 0.485413 MiB 00:17:49.648 element at address: 0x200012c00000 with size: 0.434204 MiB 00:17:49.648 element at address: 0x200028800000 with size: 0.390442 MiB 00:17:49.648 element at address: 0x200000800000 with size: 0.350891 MiB 00:17:49.648 list of standard malloc elements. 
size: 199.288940 MiB
00:17:49.648 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:17:49.648 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:17:49.648 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:17:49.648 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:17:49.648 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:17:49.649 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:17:49.649 element at address: 0x200019deff40 with size: 0.062683 MiB
00:17:49.649 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:17:49.649 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:17:49.649 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:17:49.649 element at address: 0x200012bff040 with size: 0.000305 MiB
00:17:49.649 [... several hundred elements of size 0.000244 MiB each, at addresses from 0x2000002d7b00 through 0x20002886fe80 (ranges 0x2000003d9d80-0x2000004ffdc0, 0x20000087e1c0-0x2000008ffa80, 0x200000c7d3c0-0x200000cff000, 0x20000a5ff200-0x20000a5fff00, 0x200012bff180-0x200012cefbc0, 0x2000192fdd00-0x200019ebc680, 0x20001b48f8c0-0x20001b4953c0, 0x200028863f40-0x20002886fe80); identical repeats elided ...]
00:17:49.651 list of memzone associated elements.
size: 607.930908 MiB
00:17:49.651 element at address: 0x20001b4954c0 with size: 211.416809 MiB
00:17:49.651 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:17:49.651 element at address: 0x20002886ff80 with size: 157.562622 MiB
00:17:49.651 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:17:49.651 element at address: 0x200012df1e40 with size: 100.055115 MiB
00:17:49.651 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57031_0
00:17:49.651 element at address: 0x200000dff340 with size: 48.003113 MiB
00:17:49.651 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57031_0
00:17:49.651 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:17:49.651 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57031_0
00:17:49.651 element at address: 0x200019fbe900 with size: 20.255615 MiB
00:17:49.651 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:17:49.651 element at address: 0x2000327feb00 with size: 18.005127 MiB
00:17:49.651 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:17:49.651 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:17:49.651 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57031_0
00:17:49.651 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:17:49.651 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57031
00:17:49.651 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:17:49.651 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57031
00:17:49.651 element at address: 0x2000196fde00 with size: 1.008179 MiB
00:17:49.651 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:17:49.651 element at address: 0x200019ebc780 with size: 1.008179 MiB
00:17:49.651 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:17:49.651 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:17:49.651 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:17:49.651 element at address: 0x200012cefcc0 with size: 1.008179 MiB
00:17:49.651 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:17:49.651 element at address: 0x200000cff100 with size: 1.000549 MiB
00:17:49.651 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57031
00:17:49.651 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:17:49.651 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57031
00:17:49.651 element at address: 0x200019affd40 with size: 1.000549 MiB
00:17:49.651 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57031
00:17:49.651 element at address: 0x2000326fe8c0 with size: 1.000549 MiB
00:17:49.651 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57031
00:17:49.651 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:17:49.651 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57031
00:17:49.651 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:17:49.651 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57031
00:17:49.651 element at address: 0x20001967dac0 with size: 0.500549 MiB
00:17:49.651 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:17:49.651 element at address: 0x200012c6f980 with size: 0.500549 MiB
00:17:49.651 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:17:49.651 element at address: 0x200019e7c440 with size: 0.250549 MiB
00:17:49.651 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:17:49.651 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:17:49.651 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57031
00:17:49.651 element at address: 0x20000085df80 with size: 0.125549 MiB
00:17:49.651 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57031
00:17:49.651 element at address: 0x2000192f5ac0 with size: 0.031799 MiB
00:17:49.651 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:17:49.651 element at address: 0x200028864140 with size: 0.023804 MiB
00:17:49.651 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:17:49.651 element at address: 0x200000859d40 with size: 0.016174 MiB
00:17:49.651 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57031
00:17:49.651 element at address: 0x20002886a2c0 with size: 0.002502 MiB
00:17:49.651 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:17:49.651 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:17:49.651 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57031
00:17:49.651 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:17:49.651 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57031
00:17:49.651 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:17:49.651 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57031
00:17:49.651 element at address: 0x20002886ae00 with size: 0.000366 MiB
00:17:49.651 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:17:49.651 12:49:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:17:49.651 12:49:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57031
00:17:49.651 12:49:32 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57031 ']'
00:17:49.651 12:49:32 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57031
00:17:49.651 12:49:32 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:17:49.651 12:49:32 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:49.651 12:49:32 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57031
killing process with pid 57031
12:49:32 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:49.651 12:49:32 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:49.651 12:49:32 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57031'
00:17:49.651 12:49:32 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57031
00:17:49.651 12:49:32 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57031
00:17:51.569
00:17:51.569 real	0m2.717s
00:17:51.569 user	0m2.763s
00:17:51.569 sys	0m0.381s
00:17:51.569 12:49:33 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:51.569 12:49:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:17:51.569 ************************************
00:17:51.569 END TEST dpdk_mem_utility
00:17:51.569 ************************************
00:17:51.569 12:49:33 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:17:51.569 12:49:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:51.569 12:49:33 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:51.569 12:49:33 -- common/autotest_common.sh@10 -- # set +x
00:17:51.569 ************************************
00:17:51.569 START TEST event
00:17:51.569 ************************************
00:17:51.569 12:49:33 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:17:51.569 * Looking for test storage...
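The killprocess xtrace above (autotest_common.sh@954-@978) walks a common harness teardown pattern: bail out on an empty pid, probe liveness with `kill -0`, resolve the process's command name, refuse to signal a sudo wrapper, then kill and reap. The sketch below is a simplified reconstruction of that pattern, not SPDK's exact function:

```shell
# Simplified reconstruction of the killprocess pattern traced above;
# not the exact autotest_common.sh implementation.
killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1               # '[' -z 57031 ']'
    kill -0 "$pid" 2>/dev/null || return 0  # kill -0 57031: already gone?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    else
        process_name=$(ps -o comm= -p "$pid")
    fi
    [ "$process_name" = sudo ] && return 1  # never SIGTERM the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true         # reap; ignore the SIGTERM status
}
```

In the log, pid 57031 is the dpdk_mem_utility app started earlier in this test; the `ps ... -o comm=` lookup is what produced the `process_name=reactor_0` line.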
00:17:51.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:17:51.569 12:49:33 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:51.569 12:49:33 event -- common/autotest_common.sh@1711 -- # lcov --version
00:17:51.569 12:49:33 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:51.569 12:49:33 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:51.569 12:49:33 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:51.569 12:49:33 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:51.569 12:49:33 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:51.569 12:49:33 event -- scripts/common.sh@336 -- # IFS=.-:
00:17:51.569 12:49:33 event -- scripts/common.sh@336 -- # read -ra ver1
00:17:51.569 12:49:33 event -- scripts/common.sh@337 -- # IFS=.-:
00:17:51.569 12:49:33 event -- scripts/common.sh@337 -- # read -ra ver2
00:17:51.569 12:49:33 event -- scripts/common.sh@338 -- # local 'op=<'
00:17:51.569 12:49:33 event -- scripts/common.sh@340 -- # ver1_l=2
00:17:51.569 12:49:33 event -- scripts/common.sh@341 -- # ver2_l=1
00:17:51.569 12:49:33 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:51.569 12:49:33 event -- scripts/common.sh@344 -- # case "$op" in
00:17:51.569 12:49:33 event -- scripts/common.sh@345 -- # : 1
00:17:51.569 12:49:33 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:51.569 12:49:33 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:51.569 12:49:33 event -- scripts/common.sh@365 -- # decimal 1
00:17:51.569 12:49:33 event -- scripts/common.sh@353 -- # local d=1
00:17:51.569 12:49:33 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:51.569 12:49:33 event -- scripts/common.sh@355 -- # echo 1
00:17:51.569 12:49:33 event -- scripts/common.sh@365 -- # ver1[v]=1
00:17:51.569 12:49:33 event -- scripts/common.sh@366 -- # decimal 2
00:17:51.569 12:49:33 event -- scripts/common.sh@353 -- # local d=2
00:17:51.569 12:49:33 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:51.569 12:49:33 event -- scripts/common.sh@355 -- # echo 2
00:17:51.569 12:49:33 event -- scripts/common.sh@366 -- # ver2[v]=2
00:17:51.569 12:49:33 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:51.569 12:49:33 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:51.569 12:49:33 event -- scripts/common.sh@368 -- # return 0
00:17:51.569 12:49:33 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:51.569 12:49:33 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:51.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:51.569 --rc genhtml_branch_coverage=1
00:17:51.569 --rc genhtml_function_coverage=1
00:17:51.569 --rc genhtml_legend=1
00:17:51.569 --rc geninfo_all_blocks=1
00:17:51.569 --rc geninfo_unexecuted_blocks=1
00:17:51.569
00:17:51.569 '
00:17:51.569 12:49:33 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:51.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:51.569 --rc genhtml_branch_coverage=1
00:17:51.569 --rc genhtml_function_coverage=1
00:17:51.569 --rc genhtml_legend=1
00:17:51.569 --rc geninfo_all_blocks=1
00:17:51.569 --rc geninfo_unexecuted_blocks=1
00:17:51.569
00:17:51.569 '
00:17:51.569 12:49:33 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:17:51.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:51.569 --rc genhtml_branch_coverage=1
00:17:51.569 --rc genhtml_function_coverage=1
00:17:51.569 --rc genhtml_legend=1
00:17:51.569 --rc geninfo_all_blocks=1
00:17:51.569 --rc geninfo_unexecuted_blocks=1
00:17:51.569
00:17:51.569 '
00:17:51.569 12:49:33 event -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:17:51.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:51.569 --rc genhtml_branch_coverage=1
00:17:51.569 --rc genhtml_function_coverage=1
00:17:51.569 --rc genhtml_legend=1
00:17:51.569 --rc geninfo_all_blocks=1
00:17:51.569 --rc geninfo_unexecuted_blocks=1
00:17:51.569
00:17:51.569 '
00:17:51.569 12:49:33 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:17:51.569 12:49:33 event -- bdev/nbd_common.sh@6 -- # set -e
00:17:51.569 12:49:33 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:17:51.569 12:49:33 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:17:51.569 12:49:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:51.569 12:49:33 event -- common/autotest_common.sh@10 -- # set +x
00:17:51.569 ************************************
00:17:51.569 START TEST event_perf
00:17:51.569 ************************************
00:17:51.569 12:49:33 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:17:51.569 Running I/O for 1 seconds...[2024-12-05 12:49:33.961335] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization...
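The scripts/common.sh xtrace above steps through `lt 1.15 2` → `cmp_versions 1.15 '<' 2`: split both version strings on `.`, `-`, or `:`, then compare them component by component. Here is a minimal sketch of that comparison; it is a simplified reconstruction for illustration (the real script also normalizes each component through a `decimal` helper), not the exact SPDK code:

```shell
# Minimal sketch of scripts/common.sh's lt/cmp_versions as walked through
# by the xtrace above; simplified reconstruction, not the exact script.
cmp_versions() {
    local ver1 ver2 ver1_l ver2_l op=$2 v c1 c2
    IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        c1=${ver1[v]:-0} c2=${ver2[v]:-0}    # missing components count as 0
        if (( c1 > c2 )); then [[ $op == ">" ]]; return; fi
        if (( c1 < c2 )); then [[ $op == "<" ]]; return; fi
    done
    return 1    # equal: neither strictly < nor >
}
lt() { cmp_versions "$1" "<" "$2"; }

lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

In the trace, the result of this check gates which `--rc lcov_*` coverage options get exported for the rest of the test run.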
00:17:51.569 [2024-12-05 12:49:33.961775] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57123 ]
00:17:51.569 [2024-12-05 12:49:34.122092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:51.826 [2024-12-05 12:49:34.226674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:51.826 [2024-12-05 12:49:34.226745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:17:51.826 [2024-12-05 12:49:34.226824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:51.826 Running I/O for 1 seconds...[2024-12-05 12:49:34.226895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:17:53.196
00:17:53.196 lcore  0:   195689
00:17:53.196 lcore  1:   195687
00:17:53.196 lcore  2:   195688
00:17:53.196 lcore  3:   195687
00:17:53.196 done.
00:17:53.196 ************************************
00:17:53.196 END TEST event_perf
00:17:53.196 ************************************
00:17:53.196
00:17:53.196 real	0m1.471s
00:17:53.196 user	0m4.263s
00:17:53.197 sys	0m0.083s
00:17:53.197 12:49:35 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:53.197 12:49:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:17:53.197 12:49:35 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:17:53.197 12:49:35 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:17:53.197 12:49:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:53.197 12:49:35 event -- common/autotest_common.sh@10 -- # set +x
00:17:53.197 ************************************
00:17:53.197 START TEST event_reactor
00:17:53.197 ************************************
00:17:53.197 12:49:35 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:17:53.197 [2024-12-05 12:49:35.465545] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization...
00:17:53.197 [2024-12-05 12:49:35.465667] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57168 ]
00:17:53.197 [2024-12-05 12:49:35.625746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:53.197 [2024-12-05 12:49:35.726937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:54.566 test_start
00:17:54.566 oneshot
00:17:54.566 tick 100
00:17:54.566 tick 100
00:17:54.566 tick 250
00:17:54.566 tick 100
00:17:54.566 tick 100
00:17:54.566 tick 250
00:17:54.566 tick 100
00:17:54.566 tick 500
00:17:54.566 tick 100
00:17:54.566 tick 100
00:17:54.566 tick 250
00:17:54.566 tick 100
00:17:54.566 tick 100
00:17:54.566 test_end
00:17:54.566
00:17:54.566 real	0m1.449s
00:17:54.566 user	0m1.275s
00:17:54.566 sys	0m0.066s
00:17:54.566 ************************************
00:17:54.566 END TEST event_reactor
00:17:54.566 ************************************
00:17:54.566 12:49:36 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:54.566 12:49:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:17:54.566 12:49:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:17:54.566 12:49:36 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:17:54.566 12:49:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:54.566 12:49:36 event -- common/autotest_common.sh@10 -- # set +x
00:17:54.566 ************************************
00:17:54.566 START TEST event_reactor_perf
00:17:54.566 ************************************
00:17:54.566 12:49:36 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:17:54.566 [2024-12-05
12:49:36.951353] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:17:54.566 [2024-12-05 12:49:36.951674] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57199 ] 00:17:54.566 [2024-12-05 12:49:37.113138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.829 [2024-12-05 12:49:37.214043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.205 test_start 00:17:56.205 test_end 00:17:56.205 Performance: 310553 events per second 00:17:56.205 00:17:56.205 real 0m1.447s 00:17:56.205 user 0m1.271s 00:17:56.205 sys 0m0.066s 00:17:56.205 ************************************ 00:17:56.205 END TEST event_reactor_perf 00:17:56.205 ************************************ 00:17:56.205 12:49:38 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.205 12:49:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:17:56.205 12:49:38 event -- event/event.sh@49 -- # uname -s 00:17:56.205 12:49:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:17:56.205 12:49:38 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:17:56.205 12:49:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:56.205 12:49:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:56.205 12:49:38 event -- common/autotest_common.sh@10 -- # set +x 00:17:56.205 ************************************ 00:17:56.205 START TEST event_scheduler 00:17:56.205 ************************************ 00:17:56.205 12:49:38 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:17:56.205 * Looking for test storage... 
00:17:56.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:17:56.205 12:49:38 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:56.205 12:49:38 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:17:56.205 12:49:38 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:56.205 12:49:38 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:56.205 12:49:38 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:56.205 12:49:38 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:56.205 12:49:38 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:56.205 12:49:38 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:17:56.205 12:49:38 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:17:56.205 12:49:38 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:17:56.205 12:49:38 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:17:56.205 12:49:38 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:17:56.205 12:49:38 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:17:56.205 12:49:38 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:17:56.205 12:49:38 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:56.205 12:49:38 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:17:56.205 12:49:38 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:17:56.205 12:49:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:56.205 12:49:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:56.205 12:49:38 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:17:56.205 12:49:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:17:56.205 12:49:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:56.206 12:49:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:17:56.206 12:49:38 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:17:56.206 12:49:38 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:17:56.206 12:49:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:17:56.206 12:49:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:56.206 12:49:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:17:56.206 12:49:38 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:17:56.206 12:49:38 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:56.206 12:49:38 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:56.206 12:49:38 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:17:56.206 12:49:38 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:56.206 12:49:38 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:56.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.206 --rc genhtml_branch_coverage=1 00:17:56.206 --rc genhtml_function_coverage=1 00:17:56.206 --rc genhtml_legend=1 00:17:56.206 --rc geninfo_all_blocks=1 00:17:56.206 --rc geninfo_unexecuted_blocks=1 00:17:56.206 00:17:56.206 ' 00:17:56.206 12:49:38 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:56.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.206 --rc genhtml_branch_coverage=1 00:17:56.206 --rc genhtml_function_coverage=1 00:17:56.206 --rc 
genhtml_legend=1 00:17:56.206 --rc geninfo_all_blocks=1 00:17:56.206 --rc geninfo_unexecuted_blocks=1 00:17:56.206 00:17:56.206 ' 00:17:56.206 12:49:38 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:56.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.206 --rc genhtml_branch_coverage=1 00:17:56.206 --rc genhtml_function_coverage=1 00:17:56.206 --rc genhtml_legend=1 00:17:56.206 --rc geninfo_all_blocks=1 00:17:56.206 --rc geninfo_unexecuted_blocks=1 00:17:56.206 00:17:56.206 ' 00:17:56.206 12:49:38 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:56.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.206 --rc genhtml_branch_coverage=1 00:17:56.206 --rc genhtml_function_coverage=1 00:17:56.206 --rc genhtml_legend=1 00:17:56.206 --rc geninfo_all_blocks=1 00:17:56.206 --rc geninfo_unexecuted_blocks=1 00:17:56.206 00:17:56.206 ' 00:17:56.206 12:49:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:17:56.206 12:49:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=57275 00:17:56.206 12:49:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:17:56.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:56.206 12:49:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 57275 00:17:56.206 12:49:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:17:56.206 12:49:38 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 57275 ']' 00:17:56.206 12:49:38 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.206 12:49:38 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.206 12:49:38 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.206 12:49:38 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.206 12:49:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:17:56.206 [2024-12-05 12:49:38.604295] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:17:56.206 [2024-12-05 12:49:38.604401] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57275 ] 00:17:56.206 [2024-12-05 12:49:38.758294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:56.465 [2024-12-05 12:49:38.897816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.465 [2024-12-05 12:49:38.897928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.465 [2024-12-05 12:49:38.898037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:56.465 [2024-12-05 12:49:38.898104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.033 12:49:39 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.033 12:49:39 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:17:57.033 12:49:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:17:57.033 12:49:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.033 12:49:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:17:57.033 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:57.033 POWER: Cannot set governor of lcore 0 to userspace 00:17:57.033 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:57.033 POWER: Cannot set governor of lcore 0 to performance 00:17:57.033 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:57.033 POWER: Cannot set governor of lcore 0 to userspace 00:17:57.033 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:57.033 POWER: Cannot set governor of lcore 0 to userspace 00:17:57.033 GUEST_CHANNEL: Opening 
channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:17:57.033 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:17:57.033 POWER: Unable to set Power Management Environment for lcore 0 00:17:57.033 [2024-12-05 12:49:39.456051] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:17:57.033 [2024-12-05 12:49:39.456071] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:17:57.033 [2024-12-05 12:49:39.456080] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:17:57.033 [2024-12-05 12:49:39.456098] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:17:57.033 [2024-12-05 12:49:39.456105] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:17:57.033 [2024-12-05 12:49:39.456114] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:17:57.033 12:49:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.033 12:49:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:17:57.033 12:49:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.033 12:49:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:17:57.292 [2024-12-05 12:49:39.683354] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:17:57.292 12:49:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.292 12:49:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:17:57.292 12:49:39 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:57.292 12:49:39 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.292 12:49:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:17:57.292 ************************************ 00:17:57.292 START TEST scheduler_create_thread 00:17:57.292 ************************************ 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:57.292 2 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:57.292 3 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:57.292 4 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:57.292 5 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:57.292 6 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:17:57.292 7 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:57.292 8 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:57.292 9 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:57.292 10 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.292 12:49:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:58.827 12:49:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.827 12:49:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:17:58.827 12:49:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:17:58.827 12:49:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.827 12:49:41 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:59.765 ************************************ 00:17:59.765 END TEST scheduler_create_thread 00:17:59.765 ************************************ 00:17:59.765 12:49:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.765 00:17:59.765 real 0m2.618s 00:17:59.765 user 0m0.019s 00:17:59.765 sys 0m0.002s 00:17:59.765 12:49:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.765 12:49:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:59.765 12:49:42 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:17:59.765 12:49:42 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 57275 00:17:59.765 12:49:42 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 57275 ']' 00:17:59.765 12:49:42 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 57275 00:18:00.026 12:49:42 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:18:00.026 12:49:42 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.026 12:49:42 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57275 00:18:00.026 killing process with pid 57275 00:18:00.026 12:49:42 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:00.026 12:49:42 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:00.026 12:49:42 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57275' 00:18:00.026 12:49:42 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 57275 00:18:00.026 12:49:42 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 57275 00:18:00.284 [2024-12-05 12:49:42.794040] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:18:00.932 00:18:00.932 real 0m5.000s 00:18:00.932 user 0m8.726s 00:18:00.932 sys 0m0.337s 00:18:00.932 12:49:43 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:00.932 ************************************ 00:18:00.932 END TEST event_scheduler 00:18:00.932 ************************************ 00:18:00.932 12:49:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:00.932 12:49:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:18:00.932 12:49:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:18:00.932 12:49:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:00.933 12:49:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.933 12:49:43 event -- common/autotest_common.sh@10 -- # set +x 00:18:00.933 ************************************ 00:18:00.933 START TEST app_repeat 00:18:00.933 ************************************ 00:18:00.933 12:49:43 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:18:00.933 12:49:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:00.933 12:49:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:00.933 12:49:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:18:00.933 12:49:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:00.933 12:49:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:18:00.933 12:49:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:18:00.933 12:49:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:18:00.933 Process app_repeat pid: 57375 00:18:00.933 spdk_app_start Round 0 00:18:00.933 12:49:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=57375 00:18:00.933 12:49:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' 
SIGINT SIGTERM EXIT 00:18:00.933 12:49:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57375' 00:18:00.933 12:49:43 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:18:00.933 12:49:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:18:00.933 12:49:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:18:00.933 12:49:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57375 /var/tmp/spdk-nbd.sock 00:18:00.933 12:49:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57375 ']' 00:18:00.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:00.933 12:49:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:00.933 12:49:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.933 12:49:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:00.933 12:49:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.933 12:49:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:01.206 [2024-12-05 12:49:43.506215] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:18:01.206 [2024-12-05 12:49:43.506381] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57375 ] 00:18:01.206 [2024-12-05 12:49:43.685972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:01.206 [2024-12-05 12:49:43.786254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.206 [2024-12-05 12:49:43.786545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.148 12:49:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.148 12:49:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:18:02.148 12:49:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:02.148 Malloc0 00:18:02.148 12:49:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:02.408 Malloc1 00:18:02.408 12:49:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:02.408 12:49:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:02.408 12:49:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:02.408 12:49:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:02.408 12:49:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:02.408 12:49:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:02.408 12:49:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:02.408 12:49:44 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:02.408 12:49:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:02.408 12:49:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:02.408 12:49:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:02.408 12:49:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:02.408 12:49:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:18:02.408 12:49:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:02.408 12:49:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:02.408 12:49:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:18:02.670 /dev/nbd0 00:18:02.670 12:49:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:02.670 12:49:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:02.670 12:49:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:02.670 12:49:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:18:02.670 12:49:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:02.670 12:49:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:02.670 12:49:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:02.670 12:49:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:18:02.670 12:49:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:02.670 12:49:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:02.670 12:49:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:02.670 1+0 records in 00:18:02.670 1+0 
records out 00:18:02.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243779 s, 16.8 MB/s 00:18:02.670 12:49:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:02.670 12:49:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:18:02.670 12:49:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:02.670 12:49:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:02.670 12:49:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:18:02.670 12:49:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:02.670 12:49:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:02.670 12:49:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:18:02.932 /dev/nbd1 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:02.932 12:49:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:02.932 12:49:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:18:02.932 12:49:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:02.932 12:49:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:02.932 12:49:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:02.932 12:49:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:18:02.932 12:49:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:02.932 12:49:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:02.932 12:49:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:02.932 1+0 records in 00:18:02.932 1+0 records out 00:18:02.932 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196128 s, 20.9 MB/s 00:18:02.932 12:49:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:02.932 12:49:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:18:02.932 12:49:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:02.932 12:49:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:02.932 12:49:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:02.932 { 00:18:02.932 "nbd_device": "/dev/nbd0", 00:18:02.932 "bdev_name": "Malloc0" 00:18:02.932 }, 00:18:02.932 { 00:18:02.932 "nbd_device": "/dev/nbd1", 00:18:02.932 "bdev_name": "Malloc1" 00:18:02.932 } 00:18:02.932 ]' 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:02.932 { 00:18:02.932 "nbd_device": "/dev/nbd0", 00:18:02.932 "bdev_name": "Malloc0" 00:18:02.932 }, 00:18:02.932 { 00:18:02.932 "nbd_device": "/dev/nbd1", 00:18:02.932 "bdev_name": "Malloc1" 00:18:02.932 } 00:18:02.932 ]' 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
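The `waitfornbd` calls traced above poll `/proc/partitions` until the kernel exposes the new nbd device. A minimal sketch of that retry loop (the bound of 20 iterations matches the `(( i <= 20 ))` checks in the trace; the sleep interval is an assumption, since the trace does not show it):

```shell
# Sketch of a waitfornbd-style helper: retry until the named block
# device shows up in /proc/partitions, giving up after 20 attempts.
waitfornbd_sketch() {
    local nbd_name=$1 i
    for (( i = 1; i <= 20; i++ )); do
        # -w matches the device name as a whole word, so "nbd1" does
        # not accidentally match "nbd10".
        if grep -q -w "$nbd_name" /proc/partitions; then
            return 0
        fi
        sleep 0.1   # assumed polling interval; not visible in the log
    done
    return 1
}
```

The trace then goes further than presence-checking: it `dd`s one 4 KiB block off the device with `iflag=direct` and confirms the copy is non-empty, which proves the device is actually readable, not merely registered.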
00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:02.932 /dev/nbd1' 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:02.932 /dev/nbd1' 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:18:02.932 256+0 records in 00:18:02.932 256+0 records out 00:18:02.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00714327 s, 147 MB/s 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:02.932 12:49:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:03.190 256+0 records in 00:18:03.191 256+0 records out 00:18:03.191 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206557 s, 50.8 MB/s 00:18:03.191 12:49:45 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:03.191 12:49:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:03.191 256+0 records in 00:18:03.191 256+0 records out 00:18:03.191 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213427 s, 49.1 MB/s 00:18:03.191 12:49:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:18:03.191 12:49:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:03.191 12:49:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:03.191 12:49:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:03.191 12:49:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:03.191 12:49:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:03.191 12:49:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:03.191 12:49:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:03.191 12:49:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:18:03.191 12:49:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:03.191 12:49:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:18:03.191 12:49:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:03.191 12:49:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:18:03.191 12:49:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:03.191 12:49:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:03.191 12:49:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:03.191 12:49:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:18:03.191 12:49:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:03.191 12:49:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:03.451 12:49:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:03.451 12:49:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:03.451 12:49:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:03.451 12:49:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:03.451 12:49:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:03.451 12:49:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:03.451 12:49:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:03.451 12:49:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:03.451 12:49:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:03.451 12:49:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:03.451 12:49:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:03.451 12:49:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:03.451 12:49:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:03.451 12:49:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:03.451 12:49:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:03.451 12:49:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:03.451 12:49:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:18:03.451 12:49:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:03.451 12:49:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:03.451 12:49:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:03.451 12:49:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:03.712 12:49:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:03.712 12:49:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:03.712 12:49:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:03.712 12:49:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:03.712 12:49:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:03.712 12:49:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:18:03.712 12:49:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:18:03.712 12:49:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:18:03.712 12:49:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:18:03.712 12:49:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:18:03.712 12:49:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:03.712 12:49:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:18:03.712 12:49:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:18:04.283 12:49:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:18:04.856 [2024-12-05 12:49:47.321682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:04.856 [2024-12-05 12:49:47.417740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.856 [2024-12-05 12:49:47.418014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.114 
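The `nbd_dd_data_verify` pass that just completed seeds a 1 MiB random file, writes it to each nbd device, then `cmp`s the device contents back against the file. A self-contained sketch of that pattern, with a temp file standing in for `/dev/nbd0` (an assumption — the real test writes to the block device with `oflag=direct`, which a regular file does not need):

```shell
# Sketch of the write/verify pattern from nbd_dd_data_verify.
tmp_file=$(mktemp)
dev_file=$(mktemp)   # stand-in for /dev/nbd0; real test uses oflag=direct
# Seed 1 MiB (256 x 4 KiB) of random data, as in the traced dd command.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
# "Write" the data to the device, then compare the first 1M byte-for-byte.
dd if="$tmp_file" of="$dev_file" bs=4096 count=256 2>/dev/null
if cmp -b -n 1M "$tmp_file" "$dev_file"; then verify_status=ok; else verify_status=bad; fi
echo "$verify_status"
rm -f "$tmp_file" "$dev_file"   # the real helper likewise rm's nbdrandtest
```

Because the random pattern is written through the nbd layer and read back via `cmp`, a mismatch anywhere in the 1 MiB window fails the round before the disks are detached.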
[2024-12-05 12:49:47.539119] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:18:05.114 [2024-12-05 12:49:47.539205] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:18:07.045 spdk_app_start Round 1 00:18:07.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:07.045 12:49:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:18:07.045 12:49:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:18:07.045 12:49:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57375 /var/tmp/spdk-nbd.sock 00:18:07.045 12:49:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57375 ']' 00:18:07.045 12:49:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:07.045 12:49:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.045 12:49:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:18:07.045 12:49:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.045 12:49:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:07.305 12:49:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.305 12:49:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:18:07.305 12:49:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:07.616 Malloc0 00:18:07.616 12:49:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:07.877 Malloc1 00:18:07.877 12:49:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:07.877 12:49:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:07.877 12:49:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:07.877 12:49:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:07.877 12:49:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:07.877 12:49:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:07.877 12:49:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:07.877 12:49:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:07.877 12:49:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:07.877 12:49:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:07.877 12:49:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:07.877 12:49:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:07.877 12:49:50 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:18:07.877 12:49:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:07.877 12:49:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:07.877 12:49:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:18:07.877 /dev/nbd0 00:18:08.138 12:49:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:08.138 12:49:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:08.138 1+0 records in 00:18:08.138 1+0 records out 00:18:08.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178798 s, 22.9 MB/s 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:08.138 
12:49:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:18:08.138 12:49:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:08.138 12:49:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:08.138 12:49:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:18:08.138 /dev/nbd1 00:18:08.138 12:49:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:08.138 12:49:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:08.138 1+0 records in 00:18:08.138 1+0 records out 00:18:08.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182427 s, 22.5 MB/s 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:18:08.138 12:49:50 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:08.138 12:49:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:18:08.138 12:49:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:08.138 12:49:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:08.138 12:49:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:08.138 12:49:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:08.138 12:49:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:08.398 12:49:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:08.398 { 00:18:08.398 "nbd_device": "/dev/nbd0", 00:18:08.398 "bdev_name": "Malloc0" 00:18:08.398 }, 00:18:08.398 { 00:18:08.398 "nbd_device": "/dev/nbd1", 00:18:08.398 "bdev_name": "Malloc1" 00:18:08.398 } 00:18:08.398 ]' 00:18:08.398 12:49:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:08.398 12:49:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:08.398 { 00:18:08.398 "nbd_device": "/dev/nbd0", 00:18:08.398 "bdev_name": "Malloc0" 00:18:08.398 }, 00:18:08.398 { 00:18:08.398 "nbd_device": "/dev/nbd1", 00:18:08.398 "bdev_name": "Malloc1" 00:18:08.398 } 00:18:08.398 ]' 00:18:08.398 12:49:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:08.398 /dev/nbd1' 00:18:08.398 12:49:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:08.398 12:49:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:08.398 /dev/nbd1' 00:18:08.398 12:49:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:18:08.398 12:49:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:18:08.398 
12:49:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:18:08.398 12:49:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:18:08.398 12:49:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:18:08.398 12:49:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:08.398 12:49:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:08.398 12:49:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:08.398 12:49:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:08.398 12:49:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:08.398 12:49:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:18:08.398 256+0 records in 00:18:08.398 256+0 records out 00:18:08.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00989573 s, 106 MB/s 00:18:08.398 12:49:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:08.398 12:49:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:08.399 256+0 records in 00:18:08.399 256+0 records out 00:18:08.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143085 s, 73.3 MB/s 00:18:08.399 12:49:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:08.399 12:49:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:08.399 256+0 records in 00:18:08.399 256+0 records out 00:18:08.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205808 s, 50.9 MB/s 00:18:08.660 12:49:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:18:08.660 12:49:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:08.660 12:49:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:08.660 12:49:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:08.660 12:49:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:08.660 12:49:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:08.660 12:49:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:08.660 12:49:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:08.660 12:49:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:18:08.660 12:49:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:08.660 12:49:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:18:08.660 12:49:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:08.660 12:49:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:18:08.660 12:49:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:08.660 12:49:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:08.660 12:49:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:08.660 12:49:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:18:08.660 12:49:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:08.660 12:49:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:08.660 12:49:51 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:08.660 12:49:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:08.660 12:49:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:08.660 12:49:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:08.660 12:49:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:08.660 12:49:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:08.660 12:49:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:08.660 12:49:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:08.660 12:49:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:08.660 12:49:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:08.922 12:49:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:08.922 12:49:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:08.922 12:49:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:08.922 12:49:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:08.922 12:49:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:08.922 12:49:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:08.922 12:49:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:08.922 12:49:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:08.922 12:49:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:08.922 12:49:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:08.922 12:49:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:09.183 12:49:51 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:09.183 12:49:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:09.183 12:49:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:09.183 12:49:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:09.183 12:49:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:18:09.183 12:49:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:09.183 12:49:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:18:09.184 12:49:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:18:09.184 12:49:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:18:09.184 12:49:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:18:09.184 12:49:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:09.184 12:49:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:18:09.184 12:49:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:18:09.444 12:49:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:18:10.016 [2024-12-05 12:49:52.521936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:10.278 [2024-12-05 12:49:52.606699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.278 [2024-12-05 12:49:52.606931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.278 [2024-12-05 12:49:52.712571] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:18:10.278 [2024-12-05 12:49:52.712626] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:18:12.850 spdk_app_start Round 2 00:18:12.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
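Each round closes with `nbd_get_count`: the `nbd_get_disks` RPC returns a JSON array, `jq -r '.[] | .nbd_device'` extracts the device paths, and `grep -c /dev/nbd` counts them — `2` while attached, `0` after `nbd_stop_disk`. A sketch of that counting step (using `grep -o` in place of `jq` so the example has no jq dependency; the JSON shape is copied from the trace):

```shell
# JSON as returned by nbd_get_disks while both devices are attached.
nbd_disks_json='[ { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" } ]'
# Extract the device paths (the real helper uses jq -r '.[] | .nbd_device'),
# then count how many look like nbd devices.
names=$(echo "$nbd_disks_json" | grep -o '/dev/nbd[0-9]*')
count=$(echo "$names" | grep -c /dev/nbd || true)
echo "$count"
# After nbd_stop_disk the RPC returns '[]', so the count drops to 0 and
# the '[ 0 -ne 0 ]' guard in the trace falls through to 'return 0'.
empty_count=$(echo '[]' | grep -c /dev/nbd || true)
echo "$empty_count"
```

Note the `true` fallback on the empty case: `grep -c` prints `0` but exits non-zero when nothing matches, which would otherwise trip an `errexit` shell — the traced script handles this with an explicit `-- # true` step.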
00:18:12.850 12:49:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:18:12.850 12:49:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:18:12.850 12:49:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 57375 /var/tmp/spdk-nbd.sock 00:18:12.850 12:49:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57375 ']' 00:18:12.850 12:49:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:12.850 12:49:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.850 12:49:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:12.850 12:49:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.850 12:49:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:12.850 12:49:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.850 12:49:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:18:12.850 12:49:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:12.850 Malloc0 00:18:12.850 12:49:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:13.109 Malloc1 00:18:13.109 12:49:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:13.109 12:49:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:13.109 12:49:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:13.109 12:49:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:13.109 12:49:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:13.109 12:49:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:13.109 12:49:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:13.109 12:49:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:13.109 12:49:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:13.109 12:49:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:13.109 12:49:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:13.109 12:49:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:13.109 12:49:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:18:13.109 12:49:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:13.109 12:49:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:13.110 12:49:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:18:13.370 /dev/nbd0 00:18:13.370 12:49:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:13.370 12:49:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:13.370 12:49:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:13.370 12:49:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:18:13.370 12:49:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:13.370 12:49:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:13.370 12:49:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:13.370 12:49:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:18:13.370 12:49:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:18:13.370 12:49:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:13.370 12:49:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:13.370 1+0 records in 00:18:13.370 1+0 records out 00:18:13.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224445 s, 18.2 MB/s 00:18:13.370 12:49:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:13.370 12:49:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:18:13.370 12:49:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:13.370 12:49:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:13.370 12:49:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:18:13.370 12:49:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:13.370 12:49:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:13.370 12:49:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:18:13.629 /dev/nbd1 00:18:13.629 12:49:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:13.629 12:49:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:13.629 12:49:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:13.629 12:49:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:18:13.629 12:49:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:13.629 12:49:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:13.629 12:49:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:13.629 12:49:55 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:18:13.629 12:49:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:13.629 12:49:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:13.629 12:49:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:13.629 1+0 records in 00:18:13.629 1+0 records out 00:18:13.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293232 s, 14.0 MB/s 00:18:13.629 12:49:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:13.629 12:49:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:18:13.629 12:49:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:13.629 12:49:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:13.629 12:49:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:18:13.629 12:49:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:13.629 12:49:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:13.629 12:49:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:13.629 12:49:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:13.629 12:49:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:13.629 12:49:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:13.629 { 00:18:13.629 "nbd_device": "/dev/nbd0", 00:18:13.629 "bdev_name": "Malloc0" 00:18:13.629 }, 00:18:13.629 { 00:18:13.629 "nbd_device": "/dev/nbd1", 00:18:13.629 "bdev_name": "Malloc1" 00:18:13.629 } 00:18:13.629 ]' 00:18:13.629 12:49:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:13.629 { 
00:18:13.629 "nbd_device": "/dev/nbd0", 00:18:13.629 "bdev_name": "Malloc0" 00:18:13.629 }, 00:18:13.629 { 00:18:13.629 "nbd_device": "/dev/nbd1", 00:18:13.629 "bdev_name": "Malloc1" 00:18:13.630 } 00:18:13.630 ]' 00:18:13.630 12:49:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:13.905 /dev/nbd1' 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:13.905 /dev/nbd1' 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:18:13.905 256+0 records in 00:18:13.905 256+0 records out 00:18:13.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00837601 s, 125 MB/s 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:13.905 12:49:56 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:13.905 256+0 records in 00:18:13.905 256+0 records out 00:18:13.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190999 s, 54.9 MB/s 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:13.905 256+0 records in 00:18:13.905 256+0 records out 00:18:13.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185336 s, 56.6 MB/s 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:13.905 12:49:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:14.169 12:49:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:14.169 12:49:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:14.169 12:49:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:14.169 12:49:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:14.169 12:49:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:14.169 12:49:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:14.169 12:49:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:14.169 12:49:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:14.169 12:49:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:14.169 12:49:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:14.169 12:49:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:14.169 12:49:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:14.169 12:49:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:14.169 12:49:56 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:14.169 12:49:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:14.169 12:49:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:14.169 12:49:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:14.169 12:49:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:14.169 12:49:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:14.169 12:49:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:14.169 12:49:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:14.428 12:49:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:14.428 12:49:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:14.428 12:49:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:14.428 12:49:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:14.428 12:49:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:18:14.428 12:49:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:14.428 12:49:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:18:14.428 12:49:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:18:14.428 12:49:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:18:14.428 12:49:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:18:14.428 12:49:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:14.428 12:49:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:18:14.428 12:49:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:18:14.687 12:49:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:18:15.257 
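The trace above repeatedly runs the `waitfornbd` / `waitfornbd_exit` helpers: loop up to 20 times, `grep -q -w` for the device name in `/proc/partitions`, then `break` once it appears (readiness is then confirmed with a direct-I/O `dd` read). A minimal, generic sketch of that poll-until-present pattern — `wait_for_entry` is a hypothetical name, and any line-oriented file stands in for `/proc/partitions`:

```shell
#!/usr/bin/env bash
# Hypothetical re-sketch of the waitfornbd polling pattern seen in the
# trace: retry a whole-word grep against a status file, up to 20 tries,
# sleeping briefly between attempts.
wait_for_entry() {
    local name=$1 source_file=$2 i
    for ((i = 1; i <= 20; i++)); do
        # The real helper greps /proc/partitions for the nbd name;
        # the polling structure is the same for any status source.
        if grep -q -w "$name" "$source_file"; then
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $name" >&2
    return 1
}
```

Usage would be along the lines of `wait_for_entry nbd0 /proc/partitions`; the SPDK helper additionally does a 4096-byte `dd ... iflag=direct` read from the device, as the `1+0 records in/out` lines above show, to prove the block device is actually readable and not just listed.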
[2024-12-05 12:49:57.759501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:15.520 [2024-12-05 12:49:57.843470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.520 [2024-12-05 12:49:57.843471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.520 [2024-12-05 12:49:57.949841] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:18:15.520 [2024-12-05 12:49:57.949918] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:18:18.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:18.071 12:50:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 57375 /var/tmp/spdk-nbd.sock 00:18:18.071 12:50:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 57375 ']' 00:18:18.071 12:50:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:18.071 12:50:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.071 12:50:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:18:18.071 12:50:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.071 12:50:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:18.071 12:50:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.071 12:50:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:18:18.071 12:50:00 event.app_repeat -- event/event.sh@39 -- # killprocess 57375 00:18:18.071 12:50:00 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 57375 ']' 00:18:18.071 12:50:00 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 57375 00:18:18.071 12:50:00 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:18:18.071 12:50:00 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.071 12:50:00 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57375 00:18:18.071 killing process with pid 57375 00:18:18.071 12:50:00 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:18.071 12:50:00 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:18.071 12:50:00 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57375' 00:18:18.071 12:50:00 event.app_repeat -- common/autotest_common.sh@973 -- # kill 57375 00:18:18.071 12:50:00 event.app_repeat -- common/autotest_common.sh@978 -- # wait 57375 00:18:18.643 spdk_app_start is called in Round 0. 00:18:18.643 Shutdown signal received, stop current app iteration 00:18:18.643 Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 reinitialization... 00:18:18.643 spdk_app_start is called in Round 1. 00:18:18.643 Shutdown signal received, stop current app iteration 00:18:18.643 Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 reinitialization... 00:18:18.643 spdk_app_start is called in Round 2. 
00:18:18.644 Shutdown signal received, stop current app iteration 00:18:18.644 Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 reinitialization... 00:18:18.644 spdk_app_start is called in Round 3. 00:18:18.644 Shutdown signal received, stop current app iteration 00:18:18.644 ************************************ 00:18:18.644 END TEST app_repeat 00:18:18.644 ************************************ 00:18:18.644 12:50:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:18:18.644 12:50:00 event.app_repeat -- event/event.sh@42 -- # return 0 00:18:18.644 00:18:18.644 real 0m17.497s 00:18:18.644 user 0m38.074s 00:18:18.644 sys 0m2.019s 00:18:18.644 12:50:00 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:18.644 12:50:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:18.644 12:50:00 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:18:18.644 12:50:00 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:18:18.644 12:50:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:18.644 12:50:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.644 12:50:00 event -- common/autotest_common.sh@10 -- # set +x 00:18:18.644 ************************************ 00:18:18.644 START TEST cpu_locks 00:18:18.644 ************************************ 00:18:18.644 12:50:00 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:18:18.644 * Looking for test storage... 
00:18:18.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:18:18.644 12:50:01 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:18.644 12:50:01 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:18:18.644 12:50:01 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:18.644 12:50:01 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:18.644 12:50:01 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:18:18.644 12:50:01 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:18.644 12:50:01 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:18.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.644 --rc genhtml_branch_coverage=1 00:18:18.644 --rc genhtml_function_coverage=1 00:18:18.644 --rc genhtml_legend=1 00:18:18.644 --rc geninfo_all_blocks=1 00:18:18.644 --rc geninfo_unexecuted_blocks=1 00:18:18.644 00:18:18.644 ' 00:18:18.644 12:50:01 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:18.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.644 --rc genhtml_branch_coverage=1 00:18:18.644 --rc genhtml_function_coverage=1 00:18:18.644 --rc genhtml_legend=1 00:18:18.644 --rc geninfo_all_blocks=1 00:18:18.644 --rc geninfo_unexecuted_blocks=1 
00:18:18.644 00:18:18.644 ' 00:18:18.644 12:50:01 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:18.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.644 --rc genhtml_branch_coverage=1 00:18:18.644 --rc genhtml_function_coverage=1 00:18:18.644 --rc genhtml_legend=1 00:18:18.644 --rc geninfo_all_blocks=1 00:18:18.644 --rc geninfo_unexecuted_blocks=1 00:18:18.644 00:18:18.644 ' 00:18:18.644 12:50:01 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:18.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.644 --rc genhtml_branch_coverage=1 00:18:18.644 --rc genhtml_function_coverage=1 00:18:18.644 --rc genhtml_legend=1 00:18:18.644 --rc geninfo_all_blocks=1 00:18:18.644 --rc geninfo_unexecuted_blocks=1 00:18:18.644 00:18:18.644 ' 00:18:18.644 12:50:01 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:18:18.644 12:50:01 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:18:18.644 12:50:01 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:18:18.644 12:50:01 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:18:18.644 12:50:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:18.644 12:50:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.644 12:50:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:18.644 ************************************ 00:18:18.644 START TEST default_locks 00:18:18.644 ************************************ 00:18:18.644 12:50:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:18:18.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
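The `scripts/common.sh` trace above (`lt 1.15 2` calling `cmp_versions 1.15 '<' 2`) splits each version string on `IFS=.-:` into an array and compares it field by numeric field. A hedged sketch of just the less-than case — `version_lt` is a hypothetical name for illustration, not the actual helper, which also handles `>`, `>=`, and `<=`:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the component-wise version comparison the
# trace steps through: split on '.', '-', ':' and compare numerically,
# treating missing trailing fields as 0.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    if (( ${#ver2[@]} > len )); then len=${#ver2[@]}; fi
    for ((v = 0; v < len; v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a < b )); then return 0; fi   # decided at this field
        if (( a > b )); then return 1; fi
    done
    return 1   # equal versions are not "less than"
}
```

The numeric per-field compare is what makes `1.15 < 2` true here (as the log's lcov check relies on), where a plain string comparison would get cases like `2.2` vs `2.10` wrong.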
00:18:18.644 12:50:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57806 00:18:18.644 12:50:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 57806 00:18:18.644 12:50:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 57806 ']' 00:18:18.644 12:50:01 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.644 12:50:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.644 12:50:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.644 12:50:01 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.644 12:50:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:18:18.644 12:50:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:18.644 [2024-12-05 12:50:01.217964] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:18:18.644 [2024-12-05 12:50:01.218097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57806 ] 00:18:18.905 [2024-12-05 12:50:01.377107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.905 [2024-12-05 12:50:01.462550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.475 12:50:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.475 12:50:02 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:18:19.475 12:50:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 57806 00:18:19.475 12:50:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 57806 00:18:19.475 12:50:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:19.735 12:50:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 57806 00:18:19.735 12:50:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 57806 ']' 00:18:19.735 12:50:02 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 57806 00:18:19.735 12:50:02 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:18:19.735 12:50:02 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.735 12:50:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57806 00:18:19.735 killing process with pid 57806 00:18:19.736 12:50:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:19.736 12:50:02 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:19.736 12:50:02 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 57806' 00:18:19.736 12:50:02 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 57806 00:18:19.736 12:50:02 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 57806 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57806 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 57806 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 57806 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 57806 ']' 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:21.144 12:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:18:21.144 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (57806) - No such process 00:18:21.144 ERROR: process (pid: 57806) is no longer running 00:18:21.144 ************************************ 00:18:21.144 END TEST default_locks 00:18:21.144 ************************************ 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:18:21.144 00:18:21.144 real 0m2.371s 00:18:21.144 user 0m2.379s 00:18:21.144 sys 0m0.434s 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:21.144 12:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:18:21.144 12:50:03 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:18:21.144 12:50:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:18:21.144 12:50:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:21.144 12:50:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:21.144 ************************************ 00:18:21.144 START TEST default_locks_via_rpc 00:18:21.144 ************************************ 00:18:21.144 12:50:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:18:21.144 12:50:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57859 00:18:21.144 12:50:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 57859 00:18:21.144 12:50:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:21.144 12:50:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 57859 ']' 00:18:21.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.144 12:50:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.144 12:50:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.144 12:50:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.144 12:50:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.144 12:50:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.144 [2024-12-05 12:50:03.622511] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:18:21.144 [2024-12-05 12:50:03.622616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57859 ] 00:18:21.405 [2024-12-05 12:50:03.774232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.405 [2024-12-05 12:50:03.875265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.980 12:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.980 12:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:21.980 12:50:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:18:21.980 12:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.980 12:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.980 12:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.980 12:50:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:18:21.980 12:50:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:18:21.980 12:50:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:18:21.980 12:50:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:18:21.980 12:50:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:18:21.980 12:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.980 12:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.980 12:50:04 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.980 12:50:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 57859 00:18:21.980 12:50:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 57859 00:18:21.980 12:50:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:22.241 12:50:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 57859 00:18:22.241 12:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 57859 ']' 00:18:22.241 12:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 57859 00:18:22.241 12:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:18:22.241 12:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:22.241 12:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57859 00:18:22.241 killing process with pid 57859 00:18:22.241 12:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:22.241 12:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:22.241 12:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57859' 00:18:22.241 12:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 57859 00:18:22.241 12:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 57859 00:18:23.888 ************************************ 00:18:23.888 END TEST default_locks_via_rpc 00:18:23.888 ************************************ 00:18:23.888 00:18:23.888 real 0m2.704s 00:18:23.888 user 0m2.725s 00:18:23.888 sys 0m0.451s 00:18:23.888 
12:50:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:23.888 12:50:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:23.888 12:50:06 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:18:23.888 12:50:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:23.888 12:50:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:23.888 12:50:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:23.889 ************************************ 00:18:23.889 START TEST non_locking_app_on_locked_coremask 00:18:23.889 ************************************ 00:18:23.889 12:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:18:23.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:23.889 12:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57922 00:18:23.889 12:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 57922 /var/tmp/spdk.sock 00:18:23.889 12:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:23.889 12:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 57922 ']' 00:18:23.889 12:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.889 12:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.889 12:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.889 12:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.889 12:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:23.889 [2024-12-05 12:50:06.370291] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:18:23.889 [2024-12-05 12:50:06.370393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57922 ] 00:18:24.150 [2024-12-05 12:50:06.516068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.150 [2024-12-05 12:50:06.615879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:24.721 12:50:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.721 12:50:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:18:24.721 12:50:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57938 00:18:24.721 12:50:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 57938 /var/tmp/spdk2.sock 00:18:24.721 12:50:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 57938 ']' 00:18:24.721 12:50:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:24.721 12:50:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:24.721 12:50:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:18:24.721 12:50:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.721 12:50:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:18:24.721 12:50:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:24.981 [2024-12-05 12:50:07.312207] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:18:24.981 [2024-12-05 12:50:07.312323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57938 ] 00:18:24.981 [2024-12-05 12:50:07.484431] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:18:24.981 [2024-12-05 12:50:07.484498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.241 [2024-12-05 12:50:07.683498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.621 12:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.621 12:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:18:26.621 12:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 57922 00:18:26.621 12:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:26.621 12:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 57922 00:18:26.621 12:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 57922 00:18:26.622 12:50:09 event.cpu_locks.non_locking_app_on_locked_coremask 
-- common/autotest_common.sh@954 -- # '[' -z 57922 ']' 00:18:26.622 12:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 57922 00:18:26.622 12:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:18:26.622 12:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.622 12:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57922 00:18:26.622 killing process with pid 57922 00:18:26.622 12:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:26.622 12:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:26.622 12:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57922' 00:18:26.622 12:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 57922 00:18:26.622 12:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 57922 00:18:29.917 12:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 57938 00:18:29.917 12:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 57938 ']' 00:18:29.917 12:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 57938 00:18:29.917 12:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:18:29.917 12:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:29.917 12:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57938 00:18:29.917 killing process with pid 57938 00:18:29.917 12:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:29.917 12:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:29.917 12:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57938' 00:18:29.917 12:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 57938 00:18:29.917 12:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 57938 00:18:30.855 ************************************ 00:18:30.855 END TEST non_locking_app_on_locked_coremask 00:18:30.855 ************************************ 00:18:30.855 00:18:30.855 real 0m6.857s 00:18:30.855 user 0m7.087s 00:18:30.855 sys 0m0.832s 00:18:30.855 12:50:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:30.855 12:50:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:30.855 12:50:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:18:30.855 12:50:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:30.855 12:50:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:30.855 12:50:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:30.855 ************************************ 00:18:30.855 START TEST locking_app_on_unlocked_coremask 00:18:30.855 ************************************ 00:18:30.855 12:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:18:30.855 12:50:13 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58035 00:18:30.855 12:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58035 /var/tmp/spdk.sock 00:18:30.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.855 12:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58035 ']' 00:18:30.855 12:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.855 12:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.855 12:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.855 12:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.855 12:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:30.855 12:50:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:18:30.855 [2024-12-05 12:50:13.274261] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:18:30.855 [2024-12-05 12:50:13.274724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58035 ] 00:18:31.114 [2024-12-05 12:50:13.441610] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:18:31.114 [2024-12-05 12:50:13.441667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.114 [2024-12-05 12:50:13.527524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:31.685 12:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.685 12:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:18:31.685 12:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:18:31.685 12:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58051 00:18:31.685 12:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58051 /var/tmp/spdk2.sock 00:18:31.685 12:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58051 ']' 00:18:31.685 12:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:31.685 12:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.685 12:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:31.685 12:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.685 12:50:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:31.685 [2024-12-05 12:50:14.219569] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:18:31.685 [2024-12-05 12:50:14.219851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58051 ] 00:18:31.946 [2024-12-05 12:50:14.383475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.206 [2024-12-05 12:50:14.552191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.142 12:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.142 12:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:18:33.142 12:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58051 00:18:33.142 12:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58051 00:18:33.142 12:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:33.402 12:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58035 00:18:33.402 12:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58035 ']' 00:18:33.402 12:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58035 00:18:33.402 12:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:18:33.402 12:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.402 12:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58035 00:18:33.402 killing process with pid 58035 00:18:33.402 12:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.402 12:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.402 12:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58035' 00:18:33.402 12:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58035 00:18:33.402 12:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58035 00:18:35.956 12:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58051 00:18:35.956 12:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58051 ']' 00:18:35.956 12:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58051 00:18:35.956 12:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:18:35.956 12:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.956 12:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58051 00:18:35.956 killing process with pid 58051 00:18:35.957 12:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:35.957 12:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:35.957 12:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58051' 00:18:35.957 12:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58051 00:18:35.957 12:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 58051 00:18:37.400 ************************************ 00:18:37.400 END TEST locking_app_on_unlocked_coremask 00:18:37.400 ************************************ 00:18:37.400 00:18:37.400 real 0m6.350s 00:18:37.400 user 0m6.655s 00:18:37.400 sys 0m0.821s 00:18:37.400 12:50:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:37.401 12:50:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:37.401 12:50:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:18:37.401 12:50:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:37.401 12:50:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:37.401 12:50:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:37.401 ************************************ 00:18:37.401 START TEST locking_app_on_locked_coremask 00:18:37.401 ************************************ 00:18:37.401 12:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:18:37.401 12:50:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58148 00:18:37.401 12:50:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58148 /var/tmp/spdk.sock 00:18:37.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:37.401 12:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58148 ']' 00:18:37.401 12:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.401 12:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.401 12:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.401 12:50:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:37.401 12:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.401 12:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:37.401 [2024-12-05 12:50:19.667329] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:18:37.401 [2024-12-05 12:50:19.667454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58148 ] 00:18:37.401 [2024-12-05 12:50:19.824047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.401 [2024-12-05 12:50:19.908587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.973 12:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.973 12:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:18:37.973 12:50:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58159 00:18:37.973 12:50:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58159 /var/tmp/spdk2.sock 00:18:37.973 12:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:18:37.973 12:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58159 /var/tmp/spdk2.sock 00:18:37.973 12:50:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:18:37.973 12:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:18:37.973 12:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.973 12:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:18:37.973 12:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:18:37.973 12:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58159 /var/tmp/spdk2.sock 00:18:37.973 12:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58159 ']' 00:18:37.973 12:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:37.973 12:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.973 12:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:37.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:37.973 12:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.973 12:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:37.973 [2024-12-05 12:50:20.534870] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:18:37.973 [2024-12-05 12:50:20.535431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58159 ] 00:18:38.313 [2024-12-05 12:50:20.698884] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58148 has claimed it. 00:18:38.313 [2024-12-05 12:50:20.698951] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:18:38.588 ERROR: process (pid: 58159) is no longer running 00:18:38.588 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58159) - No such process 00:18:38.588 12:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.588 12:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:18:38.588 12:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:18:38.588 12:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:38.588 12:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:38.588 12:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:38.588 12:50:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58148 00:18:38.588 12:50:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:38.588 12:50:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58148 00:18:38.848 12:50:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58148 00:18:38.848 12:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58148 ']' 00:18:38.848 12:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58148 00:18:38.848 12:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:18:38.848 12:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.848 12:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58148 00:18:38.848 
killing process with pid 58148 00:18:38.848 12:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:38.848 12:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:38.848 12:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58148' 00:18:38.848 12:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58148 00:18:38.848 12:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58148 00:18:40.234 00:18:40.234 real 0m2.960s 00:18:40.234 user 0m3.118s 00:18:40.234 sys 0m0.532s 00:18:40.234 12:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.234 ************************************ 00:18:40.234 END TEST locking_app_on_locked_coremask 00:18:40.234 ************************************ 00:18:40.234 12:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:40.234 12:50:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:18:40.234 12:50:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:40.234 12:50:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.234 12:50:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:40.234 ************************************ 00:18:40.234 START TEST locking_overlapped_coremask 00:18:40.234 ************************************ 00:18:40.234 12:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:18:40.234 12:50:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58212 00:18:40.234 12:50:22 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:40.234 12:50:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58212 /var/tmp/spdk.sock 00:18:40.234 12:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58212 ']' 00:18:40.234 12:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.235 12:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.235 12:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.235 12:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.235 12:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:40.235 [2024-12-05 12:50:22.662942] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:18:40.235 [2024-12-05 12:50:22.663078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58212 ] 00:18:40.496 [2024-12-05 12:50:22.818184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:40.496 [2024-12-05 12:50:22.904304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.496 [2024-12-05 12:50:22.904406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.496 [2024-12-05 12:50:22.904429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.072 12:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.072 12:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:18:41.072 12:50:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58230 00:18:41.072 12:50:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58230 /var/tmp/spdk2.sock 00:18:41.072 12:50:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:18:41.072 12:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:18:41.072 12:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58230 /var/tmp/spdk2.sock 00:18:41.072 12:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:18:41.072 12:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.072 12:50:23 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:18:41.072 12:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.072 12:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58230 /var/tmp/spdk2.sock 00:18:41.072 12:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58230 ']' 00:18:41.072 12:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:41.072 12:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.072 12:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:41.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:41.072 12:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.072 12:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:41.072 [2024-12-05 12:50:23.559518] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:18:41.072 [2024-12-05 12:50:23.559756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58230 ] 00:18:41.334 [2024-12-05 12:50:23.726542] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58212 has claimed it. 00:18:41.334 [2024-12-05 12:50:23.726612] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:18:41.905 ERROR: process (pid: 58230) is no longer running 00:18:41.905 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58230) - No such process 00:18:41.905 12:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.905 12:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:18:41.905 12:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:18:41.905 12:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:41.905 12:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:41.905 12:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:41.905 12:50:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:18:41.905 12:50:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:18:41.905 12:50:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:18:41.905 12:50:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:18:41.905 12:50:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58212 00:18:41.905 12:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58212 ']' 00:18:41.905 12:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58212 00:18:41.905 12:50:24 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:18:41.905 12:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.905 12:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58212 00:18:41.905 12:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.905 12:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.905 12:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58212' 00:18:41.905 killing process with pid 58212 00:18:41.905 12:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58212 00:18:41.905 12:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58212 00:18:43.295 00:18:43.295 real 0m2.881s 00:18:43.295 user 0m7.853s 00:18:43.295 sys 0m0.402s 00:18:43.295 12:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.295 12:50:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:43.295 ************************************ 00:18:43.295 END TEST locking_overlapped_coremask 00:18:43.295 ************************************ 00:18:43.295 12:50:25 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:18:43.295 12:50:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:43.295 12:50:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.295 12:50:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:43.295 ************************************ 00:18:43.295 START TEST 
locking_overlapped_coremask_via_rpc 00:18:43.295 ************************************ 00:18:43.295 12:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:18:43.295 12:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58283 00:18:43.295 12:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58283 /var/tmp/spdk.sock 00:18:43.295 12:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:18:43.295 12:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58283 ']' 00:18:43.295 12:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.295 12:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.295 12:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.295 12:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.295 12:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:43.295 [2024-12-05 12:50:25.591283] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:18:43.295 [2024-12-05 12:50:25.591428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58283 ] 00:18:43.295 [2024-12-05 12:50:25.747731] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:18:43.295 [2024-12-05 12:50:25.747784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:43.295 [2024-12-05 12:50:25.836657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.295 [2024-12-05 12:50:25.836972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.295 [2024-12-05 12:50:25.837010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:43.868 12:50:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.868 12:50:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:43.868 12:50:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58301 00:18:43.868 12:50:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58301 /var/tmp/spdk2.sock 00:18:43.868 12:50:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:18:43.868 12:50:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58301 ']' 00:18:43.868 12:50:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:43.868 12:50:26 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.868 12:50:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:43.868 12:50:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.868 12:50:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:44.129 [2024-12-05 12:50:26.518038] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:18:44.129 [2024-12-05 12:50:26.518158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58301 ] 00:18:44.129 [2024-12-05 12:50:26.691818] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:18:44.129 [2024-12-05 12:50:26.691883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:44.389 [2024-12-05 12:50:26.898961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:44.389 [2024-12-05 12:50:26.902595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.389 [2024-12-05 12:50:26.902616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.775 12:50:28 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:45.775 [2024-12-05 12:50:28.081684] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58283 has claimed it. 00:18:45.775 request: 00:18:45.775 { 00:18:45.775 "method": "framework_enable_cpumask_locks", 00:18:45.775 "req_id": 1 00:18:45.775 } 00:18:45.775 Got JSON-RPC error response 00:18:45.775 response: 00:18:45.775 { 00:18:45.775 "code": -32603, 00:18:45.775 "message": "Failed to claim CPU core: 2" 00:18:45.775 } 00:18:45.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58283 /var/tmp/spdk.sock 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58283 ']' 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58301 /var/tmp/spdk2.sock 00:18:45.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58301 ']' 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.775 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:46.036 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.036 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:46.036 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:18:46.036 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:18:46.036 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:18:46.036 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:18:46.036 00:18:46.036 real 0m3.024s 00:18:46.036 user 0m1.107s 00:18:46.036 sys 0m0.144s 00:18:46.036 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:46.036 12:50:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:46.036 ************************************ 00:18:46.036 END TEST locking_overlapped_coremask_via_rpc 00:18:46.036 ************************************ 00:18:46.036 12:50:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:18:46.036 12:50:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58283 ]] 00:18:46.036 12:50:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 58283 00:18:46.036 12:50:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58283 ']' 00:18:46.036 12:50:28 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58283 00:18:46.036 12:50:28 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:18:46.036 12:50:28 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.036 12:50:28 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58283 00:18:46.036 killing process with pid 58283 00:18:46.036 12:50:28 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.036 12:50:28 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.036 12:50:28 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58283' 00:18:46.036 12:50:28 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58283 00:18:46.036 12:50:28 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58283 00:18:47.425 12:50:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58301 ]] 00:18:47.425 12:50:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58301 00:18:47.425 12:50:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58301 ']' 00:18:47.425 12:50:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58301 00:18:47.425 12:50:29 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:18:47.425 12:50:29 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.425 12:50:29 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58301 00:18:47.425 12:50:29 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:47.425 12:50:29 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:47.425 killing process with pid 58301 00:18:47.425 12:50:29 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 58301' 00:18:47.425 12:50:29 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58301 00:18:47.425 12:50:29 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58301 00:18:48.812 12:50:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:18:48.812 12:50:31 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:18:48.812 12:50:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58283 ]] 00:18:48.813 12:50:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58283 00:18:48.813 12:50:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58283 ']' 00:18:48.813 12:50:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58283 00:18:48.813 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58283) - No such process 00:18:48.813 Process with pid 58283 is not found 00:18:48.813 Process with pid 58301 is not found 00:18:48.813 12:50:31 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58283 is not found' 00:18:48.813 12:50:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58301 ]] 00:18:48.813 12:50:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58301 00:18:48.813 12:50:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58301 ']' 00:18:48.813 12:50:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58301 00:18:48.813 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58301) - No such process 00:18:48.813 12:50:31 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58301 is not found' 00:18:48.813 12:50:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:18:48.813 00:18:48.813 real 0m30.143s 00:18:48.813 user 0m52.026s 00:18:48.813 sys 0m4.402s 00:18:48.813 ************************************ 00:18:48.813 END TEST cpu_locks 00:18:48.813 ************************************ 00:18:48.813 12:50:31 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:18:48.813 12:50:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:48.813 00:18:48.813 real 0m57.419s 00:18:48.813 user 1m45.830s 00:18:48.813 sys 0m7.175s 00:18:48.813 12:50:31 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.813 ************************************ 00:18:48.813 END TEST event 00:18:48.813 ************************************ 00:18:48.813 12:50:31 event -- common/autotest_common.sh@10 -- # set +x 00:18:48.813 12:50:31 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:18:48.813 12:50:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:48.813 12:50:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:48.813 12:50:31 -- common/autotest_common.sh@10 -- # set +x 00:18:48.813 ************************************ 00:18:48.813 START TEST thread 00:18:48.813 ************************************ 00:18:48.813 12:50:31 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:18:48.813 * Looking for test storage... 
00:18:48.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:18:48.813 12:50:31 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:48.813 12:50:31 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:18:48.813 12:50:31 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:48.813 12:50:31 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:48.813 12:50:31 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:48.813 12:50:31 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:48.813 12:50:31 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:48.813 12:50:31 thread -- scripts/common.sh@336 -- # IFS=.-: 00:18:48.813 12:50:31 thread -- scripts/common.sh@336 -- # read -ra ver1 00:18:48.813 12:50:31 thread -- scripts/common.sh@337 -- # IFS=.-: 00:18:48.813 12:50:31 thread -- scripts/common.sh@337 -- # read -ra ver2 00:18:48.813 12:50:31 thread -- scripts/common.sh@338 -- # local 'op=<' 00:18:48.813 12:50:31 thread -- scripts/common.sh@340 -- # ver1_l=2 00:18:48.813 12:50:31 thread -- scripts/common.sh@341 -- # ver2_l=1 00:18:48.813 12:50:31 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:48.813 12:50:31 thread -- scripts/common.sh@344 -- # case "$op" in 00:18:48.813 12:50:31 thread -- scripts/common.sh@345 -- # : 1 00:18:48.813 12:50:31 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:48.813 12:50:31 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:48.813 12:50:31 thread -- scripts/common.sh@365 -- # decimal 1 00:18:48.813 12:50:31 thread -- scripts/common.sh@353 -- # local d=1 00:18:48.813 12:50:31 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:48.813 12:50:31 thread -- scripts/common.sh@355 -- # echo 1 00:18:48.813 12:50:31 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:18:48.813 12:50:31 thread -- scripts/common.sh@366 -- # decimal 2 00:18:48.813 12:50:31 thread -- scripts/common.sh@353 -- # local d=2 00:18:48.813 12:50:31 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:48.813 12:50:31 thread -- scripts/common.sh@355 -- # echo 2 00:18:48.813 12:50:31 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:18:48.813 12:50:31 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:48.813 12:50:31 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:48.813 12:50:31 thread -- scripts/common.sh@368 -- # return 0 00:18:48.813 12:50:31 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:48.813 12:50:31 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:48.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.813 --rc genhtml_branch_coverage=1 00:18:48.813 --rc genhtml_function_coverage=1 00:18:48.813 --rc genhtml_legend=1 00:18:48.813 --rc geninfo_all_blocks=1 00:18:48.813 --rc geninfo_unexecuted_blocks=1 00:18:48.813 00:18:48.813 ' 00:18:48.813 12:50:31 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:48.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.813 --rc genhtml_branch_coverage=1 00:18:48.813 --rc genhtml_function_coverage=1 00:18:48.813 --rc genhtml_legend=1 00:18:48.813 --rc geninfo_all_blocks=1 00:18:48.813 --rc geninfo_unexecuted_blocks=1 00:18:48.813 00:18:48.813 ' 00:18:48.813 12:50:31 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:48.813 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.813 --rc genhtml_branch_coverage=1 00:18:48.813 --rc genhtml_function_coverage=1 00:18:48.813 --rc genhtml_legend=1 00:18:48.813 --rc geninfo_all_blocks=1 00:18:48.813 --rc geninfo_unexecuted_blocks=1 00:18:48.813 00:18:48.813 ' 00:18:48.813 12:50:31 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:48.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.813 --rc genhtml_branch_coverage=1 00:18:48.813 --rc genhtml_function_coverage=1 00:18:48.813 --rc genhtml_legend=1 00:18:48.813 --rc geninfo_all_blocks=1 00:18:48.813 --rc geninfo_unexecuted_blocks=1 00:18:48.813 00:18:48.813 ' 00:18:48.813 12:50:31 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:18:48.813 12:50:31 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:18:48.813 12:50:31 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:48.813 12:50:31 thread -- common/autotest_common.sh@10 -- # set +x 00:18:48.813 ************************************ 00:18:48.813 START TEST thread_poller_perf 00:18:48.813 ************************************ 00:18:48.813 12:50:31 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:18:49.073 [2024-12-05 12:50:31.402362] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:18:49.073 [2024-12-05 12:50:31.402632] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58455 ] 00:18:49.073 [2024-12-05 12:50:31.554612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.334 [2024-12-05 12:50:31.658553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.334 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:18:50.281 [2024-12-05T12:50:32.868Z] ====================================== 00:18:50.281 [2024-12-05T12:50:32.868Z] busy:2610836838 (cyc) 00:18:50.281 [2024-12-05T12:50:32.868Z] total_run_count: 294000 00:18:50.281 [2024-12-05T12:50:32.868Z] tsc_hz: 2600000000 (cyc) 00:18:50.281 [2024-12-05T12:50:32.868Z] ====================================== 00:18:50.281 [2024-12-05T12:50:32.868Z] poller_cost: 8880 (cyc), 3415 (nsec) 00:18:50.281 ************************************ 00:18:50.281 END TEST thread_poller_perf 00:18:50.281 ************************************ 00:18:50.281 00:18:50.281 real 0m1.451s 00:18:50.281 user 0m1.285s 00:18:50.281 sys 0m0.057s 00:18:50.281 12:50:32 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:50.281 12:50:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:18:50.281 12:50:32 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:18:50.281 12:50:32 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:18:50.281 12:50:32 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.281 12:50:32 thread -- common/autotest_common.sh@10 -- # set +x 00:18:50.541 ************************************ 00:18:50.541 START TEST thread_poller_perf 00:18:50.541 
************************************ 00:18:50.541 12:50:32 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:18:50.541 [2024-12-05 12:50:32.897169] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:18:50.541 [2024-12-05 12:50:32.897452] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58492 ] 00:18:50.541 [2024-12-05 12:50:33.054711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.801 [2024-12-05 12:50:33.157072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.801 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:18:51.742 [2024-12-05T12:50:34.329Z] ====================================== 00:18:51.742 [2024-12-05T12:50:34.329Z] busy:2603655956 (cyc) 00:18:51.742 [2024-12-05T12:50:34.329Z] total_run_count: 3934000 00:18:51.742 [2024-12-05T12:50:34.329Z] tsc_hz: 2600000000 (cyc) 00:18:51.742 [2024-12-05T12:50:34.329Z] ====================================== 00:18:51.742 [2024-12-05T12:50:34.329Z] poller_cost: 661 (cyc), 254 (nsec) 00:18:51.742 00:18:51.742 real 0m1.456s 00:18:51.742 user 0m1.276s 00:18:51.742 sys 0m0.072s 00:18:51.742 12:50:34 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:51.742 ************************************ 00:18:51.742 12:50:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:18:51.742 END TEST thread_poller_perf 00:18:51.742 ************************************ 00:18:52.003 12:50:34 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:18:52.003 ************************************ 00:18:52.003 END TEST thread 00:18:52.003 ************************************ 00:18:52.003 
00:18:52.003 real 0m3.122s 00:18:52.003 user 0m2.672s 00:18:52.003 sys 0m0.234s 00:18:52.003 12:50:34 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:52.003 12:50:34 thread -- common/autotest_common.sh@10 -- # set +x 00:18:52.003 12:50:34 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:18:52.003 12:50:34 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:18:52.003 12:50:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:52.003 12:50:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.003 12:50:34 -- common/autotest_common.sh@10 -- # set +x 00:18:52.003 ************************************ 00:18:52.003 START TEST app_cmdline 00:18:52.003 ************************************ 00:18:52.003 12:50:34 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:18:52.003 * Looking for test storage... 00:18:52.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:18:52.004 12:50:34 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:52.004 12:50:34 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:52.004 12:50:34 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:18:52.004 12:50:34 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@345 -- # : 1 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:52.004 12:50:34 app_cmdline -- scripts/common.sh@368 -- # return 0 00:18:52.004 12:50:34 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:52.004 12:50:34 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:52.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.004 --rc genhtml_branch_coverage=1 00:18:52.004 --rc genhtml_function_coverage=1 00:18:52.004 --rc 
genhtml_legend=1 00:18:52.004 --rc geninfo_all_blocks=1 00:18:52.004 --rc geninfo_unexecuted_blocks=1 00:18:52.004 00:18:52.004 ' 00:18:52.004 12:50:34 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:52.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.004 --rc genhtml_branch_coverage=1 00:18:52.004 --rc genhtml_function_coverage=1 00:18:52.004 --rc genhtml_legend=1 00:18:52.004 --rc geninfo_all_blocks=1 00:18:52.004 --rc geninfo_unexecuted_blocks=1 00:18:52.004 00:18:52.004 ' 00:18:52.004 12:50:34 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:52.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.004 --rc genhtml_branch_coverage=1 00:18:52.004 --rc genhtml_function_coverage=1 00:18:52.004 --rc genhtml_legend=1 00:18:52.004 --rc geninfo_all_blocks=1 00:18:52.004 --rc geninfo_unexecuted_blocks=1 00:18:52.004 00:18:52.004 ' 00:18:52.004 12:50:34 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:52.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.004 --rc genhtml_branch_coverage=1 00:18:52.004 --rc genhtml_function_coverage=1 00:18:52.004 --rc genhtml_legend=1 00:18:52.004 --rc geninfo_all_blocks=1 00:18:52.004 --rc geninfo_unexecuted_blocks=1 00:18:52.004 00:18:52.004 ' 00:18:52.004 12:50:34 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:18:52.004 12:50:34 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=58581 00:18:52.004 12:50:34 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 58581 00:18:52.004 12:50:34 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 58581 ']' 00:18:52.004 12:50:34 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:18:52.004 12:50:34 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.004 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:18:52.004 12:50:34 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.004 12:50:34 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.004 12:50:34 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.004 12:50:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:52.265 [2024-12-05 12:50:34.605839] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:18:52.265 [2024-12-05 12:50:34.605953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58581 ] 00:18:52.265 [2024-12-05 12:50:34.761675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.265 [2024-12-05 12:50:34.847305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.208 12:50:35 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.208 12:50:35 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:18:53.208 12:50:35 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:18:53.208 { 00:18:53.208 "version": "SPDK v25.01-pre git sha1 2cae84b3c", 00:18:53.208 "fields": { 00:18:53.208 "major": 25, 00:18:53.208 "minor": 1, 00:18:53.208 "patch": 0, 00:18:53.208 "suffix": "-pre", 00:18:53.208 "commit": "2cae84b3c" 00:18:53.208 } 00:18:53.208 } 00:18:53.208 12:50:35 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:18:53.208 12:50:35 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:18:53.208 12:50:35 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:18:53.208 12:50:35 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:18:53.208 12:50:35 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:18:53.208 12:50:35 app_cmdline -- app/cmdline.sh@26 -- # sort 00:18:53.208 12:50:35 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:18:53.208 12:50:35 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.208 12:50:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:53.208 12:50:35 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.208 12:50:35 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:18:53.208 12:50:35 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:18:53.208 12:50:35 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:53.208 12:50:35 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:18:53.208 12:50:35 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:53.208 12:50:35 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:53.208 12:50:35 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.208 12:50:35 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:53.208 12:50:35 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.208 12:50:35 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:53.208 12:50:35 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.208 12:50:35 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:53.208 12:50:35 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:53.208 12:50:35 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:53.470 request: 00:18:53.470 { 00:18:53.470 "method": "env_dpdk_get_mem_stats", 00:18:53.470 "req_id": 1 00:18:53.470 } 00:18:53.470 Got JSON-RPC error response 00:18:53.470 response: 00:18:53.470 { 00:18:53.470 "code": -32601, 00:18:53.470 "message": "Method not found" 00:18:53.470 } 00:18:53.470 12:50:35 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:18:53.471 12:50:35 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:53.471 12:50:35 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:53.471 12:50:35 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:53.471 12:50:35 app_cmdline -- app/cmdline.sh@1 -- # killprocess 58581 00:18:53.471 12:50:35 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 58581 ']' 00:18:53.471 12:50:35 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 58581 00:18:53.471 12:50:35 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:18:53.471 12:50:35 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.471 12:50:35 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58581 00:18:53.471 killing process with pid 58581 00:18:53.471 12:50:35 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:53.471 12:50:35 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:53.471 12:50:35 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58581' 00:18:53.471 12:50:35 app_cmdline -- common/autotest_common.sh@973 -- # kill 58581 00:18:53.471 12:50:35 app_cmdline -- common/autotest_common.sh@978 -- # wait 58581 00:18:54.857 ************************************ 00:18:54.857 END TEST app_cmdline 00:18:54.857 ************************************ 
00:18:54.857 00:18:54.857 real 0m2.754s 00:18:54.857 user 0m3.070s 00:18:54.857 sys 0m0.436s 00:18:54.857 12:50:37 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.857 12:50:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:54.857 12:50:37 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:18:54.857 12:50:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:54.857 12:50:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.857 12:50:37 -- common/autotest_common.sh@10 -- # set +x 00:18:54.857 ************************************ 00:18:54.857 START TEST version 00:18:54.857 ************************************ 00:18:54.857 12:50:37 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:18:54.857 * Looking for test storage... 00:18:54.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:18:54.857 12:50:37 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:54.857 12:50:37 version -- common/autotest_common.sh@1711 -- # lcov --version 00:18:54.857 12:50:37 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:54.857 12:50:37 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:54.857 12:50:37 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:54.857 12:50:37 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:54.857 12:50:37 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:54.857 12:50:37 version -- scripts/common.sh@336 -- # IFS=.-: 00:18:54.857 12:50:37 version -- scripts/common.sh@336 -- # read -ra ver1 00:18:54.857 12:50:37 version -- scripts/common.sh@337 -- # IFS=.-: 00:18:54.857 12:50:37 version -- scripts/common.sh@337 -- # read -ra ver2 00:18:54.857 12:50:37 version -- scripts/common.sh@338 -- # local 'op=<' 00:18:54.857 12:50:37 version -- scripts/common.sh@340 -- # ver1_l=2 00:18:54.857 12:50:37 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:18:54.857 12:50:37 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:54.857 12:50:37 version -- scripts/common.sh@344 -- # case "$op" in 00:18:54.857 12:50:37 version -- scripts/common.sh@345 -- # : 1 00:18:54.857 12:50:37 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:54.857 12:50:37 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:54.857 12:50:37 version -- scripts/common.sh@365 -- # decimal 1 00:18:54.857 12:50:37 version -- scripts/common.sh@353 -- # local d=1 00:18:54.857 12:50:37 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:54.857 12:50:37 version -- scripts/common.sh@355 -- # echo 1 00:18:54.857 12:50:37 version -- scripts/common.sh@365 -- # ver1[v]=1 00:18:54.857 12:50:37 version -- scripts/common.sh@366 -- # decimal 2 00:18:54.857 12:50:37 version -- scripts/common.sh@353 -- # local d=2 00:18:54.857 12:50:37 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:54.857 12:50:37 version -- scripts/common.sh@355 -- # echo 2 00:18:54.857 12:50:37 version -- scripts/common.sh@366 -- # ver2[v]=2 00:18:54.857 12:50:37 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:54.857 12:50:37 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:54.857 12:50:37 version -- scripts/common.sh@368 -- # return 0 00:18:54.857 12:50:37 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:54.857 12:50:37 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:54.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.857 --rc genhtml_branch_coverage=1 00:18:54.857 --rc genhtml_function_coverage=1 00:18:54.857 --rc genhtml_legend=1 00:18:54.857 --rc geninfo_all_blocks=1 00:18:54.857 --rc geninfo_unexecuted_blocks=1 00:18:54.857 00:18:54.857 ' 00:18:54.857 12:50:37 version -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:18:54.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.857 --rc genhtml_branch_coverage=1 00:18:54.857 --rc genhtml_function_coverage=1 00:18:54.857 --rc genhtml_legend=1 00:18:54.857 --rc geninfo_all_blocks=1 00:18:54.857 --rc geninfo_unexecuted_blocks=1 00:18:54.857 00:18:54.857 ' 00:18:54.857 12:50:37 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:54.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.857 --rc genhtml_branch_coverage=1 00:18:54.857 --rc genhtml_function_coverage=1 00:18:54.857 --rc genhtml_legend=1 00:18:54.857 --rc geninfo_all_blocks=1 00:18:54.857 --rc geninfo_unexecuted_blocks=1 00:18:54.857 00:18:54.857 ' 00:18:54.857 12:50:37 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:54.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.857 --rc genhtml_branch_coverage=1 00:18:54.857 --rc genhtml_function_coverage=1 00:18:54.857 --rc genhtml_legend=1 00:18:54.857 --rc geninfo_all_blocks=1 00:18:54.857 --rc geninfo_unexecuted_blocks=1 00:18:54.857 00:18:54.857 ' 00:18:54.857 12:50:37 version -- app/version.sh@17 -- # get_header_version major 00:18:54.857 12:50:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:54.857 12:50:37 version -- app/version.sh@14 -- # cut -f2 00:18:54.857 12:50:37 version -- app/version.sh@14 -- # tr -d '"' 00:18:54.857 12:50:37 version -- app/version.sh@17 -- # major=25 00:18:54.857 12:50:37 version -- app/version.sh@18 -- # get_header_version minor 00:18:54.858 12:50:37 version -- app/version.sh@14 -- # cut -f2 00:18:54.858 12:50:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:54.858 12:50:37 version -- app/version.sh@14 -- # tr -d '"' 00:18:54.858 12:50:37 version -- app/version.sh@18 -- # minor=1 00:18:54.858 12:50:37 
version -- app/version.sh@19 -- # get_header_version patch 00:18:54.858 12:50:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:54.858 12:50:37 version -- app/version.sh@14 -- # tr -d '"' 00:18:54.858 12:50:37 version -- app/version.sh@14 -- # cut -f2 00:18:54.858 12:50:37 version -- app/version.sh@19 -- # patch=0 00:18:54.858 12:50:37 version -- app/version.sh@20 -- # get_header_version suffix 00:18:54.858 12:50:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:54.858 12:50:37 version -- app/version.sh@14 -- # cut -f2 00:18:54.858 12:50:37 version -- app/version.sh@14 -- # tr -d '"' 00:18:54.858 12:50:37 version -- app/version.sh@20 -- # suffix=-pre 00:18:54.858 12:50:37 version -- app/version.sh@22 -- # version=25.1 00:18:54.858 12:50:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:18:54.858 12:50:37 version -- app/version.sh@28 -- # version=25.1rc0 00:18:54.858 12:50:37 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:54.858 12:50:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:18:54.858 12:50:37 version -- app/version.sh@30 -- # py_version=25.1rc0 00:18:54.858 12:50:37 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:18:54.858 ************************************ 00:18:54.858 END TEST version 00:18:54.858 ************************************ 00:18:54.858 00:18:54.858 real 0m0.186s 00:18:54.858 user 0m0.114s 00:18:54.858 sys 0m0.101s 00:18:54.858 12:50:37 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.858 12:50:37 version -- common/autotest_common.sh@10 -- # set +x 00:18:54.858 
12:50:37 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:18:54.858 12:50:37 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:18:54.858 12:50:37 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:18:54.858 12:50:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:54.858 12:50:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.858 12:50:37 -- common/autotest_common.sh@10 -- # set +x 00:18:54.858 ************************************ 00:18:54.858 START TEST bdev_raid 00:18:54.858 ************************************ 00:18:54.858 12:50:37 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:18:55.119 * Looking for test storage... 00:18:55.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:55.119 12:50:37 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:55.119 12:50:37 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:18:55.119 12:50:37 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:55.119 12:50:37 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@345 -- # : 1 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.119 12:50:37 bdev_raid -- scripts/common.sh@368 -- # return 0 00:18:55.119 12:50:37 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.119 12:50:37 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:55.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.119 --rc genhtml_branch_coverage=1 00:18:55.119 --rc genhtml_function_coverage=1 00:18:55.119 --rc genhtml_legend=1 00:18:55.119 --rc geninfo_all_blocks=1 00:18:55.119 --rc geninfo_unexecuted_blocks=1 00:18:55.119 00:18:55.119 ' 00:18:55.119 12:50:37 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:55.119 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:55.119 --rc genhtml_branch_coverage=1 00:18:55.119 --rc genhtml_function_coverage=1 00:18:55.119 --rc genhtml_legend=1 00:18:55.119 --rc geninfo_all_blocks=1 00:18:55.119 --rc geninfo_unexecuted_blocks=1 00:18:55.119 00:18:55.119 ' 00:18:55.119 12:50:37 bdev_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:55.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.119 --rc genhtml_branch_coverage=1 00:18:55.119 --rc genhtml_function_coverage=1 00:18:55.119 --rc genhtml_legend=1 00:18:55.119 --rc geninfo_all_blocks=1 00:18:55.119 --rc geninfo_unexecuted_blocks=1 00:18:55.119 00:18:55.119 ' 00:18:55.119 12:50:37 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:55.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.119 --rc genhtml_branch_coverage=1 00:18:55.119 --rc genhtml_function_coverage=1 00:18:55.119 --rc genhtml_legend=1 00:18:55.119 --rc geninfo_all_blocks=1 00:18:55.119 --rc geninfo_unexecuted_blocks=1 00:18:55.119 00:18:55.119 ' 00:18:55.119 12:50:37 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:55.119 12:50:37 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:18:55.119 12:50:37 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:18:55.119 12:50:37 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:18:55.119 12:50:37 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:18:55.119 12:50:37 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:18:55.119 12:50:37 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:18:55.119 12:50:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:55.119 12:50:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.119 12:50:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:55.119 ************************************ 
00:18:55.119 START TEST raid1_resize_data_offset_test 00:18:55.119 ************************************ 00:18:55.119 12:50:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:18:55.119 12:50:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=58752 00:18:55.119 12:50:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 58752' 00:18:55.119 12:50:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:55.119 Process raid pid: 58752 00:18:55.119 12:50:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 58752 00:18:55.119 12:50:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 58752 ']' 00:18:55.119 12:50:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.119 12:50:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.119 12:50:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.119 12:50:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.119 12:50:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.119 [2024-12-05 12:50:37.638982] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:18:55.119 [2024-12-05 12:50:37.639265] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.380 [2024-12-05 12:50:37.800729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.380 [2024-12-05 12:50:37.956337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.640 [2024-12-05 12:50:38.106326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:55.640 [2024-12-05 12:50:38.106523] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.209 malloc0 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.209 malloc1 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.209 12:50:38 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.209 null0 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.209 [2024-12-05 12:50:38.635454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:18:56.209 [2024-12-05 12:50:38.637592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:56.209 [2024-12-05 12:50:38.637665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:18:56.209 [2024-12-05 12:50:38.637858] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:56.209 [2024-12-05 12:50:38.637879] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:18:56.209 [2024-12-05 12:50:38.638255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:56.209 [2024-12-05 12:50:38.638475] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:56.209 [2024-12-05 12:50:38.638519] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:18:56.209 [2024-12-05 12:50:38.638742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:18:56.209 12:50:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:18:56.210 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.210 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.210 [2024-12-05 12:50:38.675429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:18:56.210 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.210 12:50:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:18:56.210 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.210 12:50:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.469 malloc2 00:18:56.469 12:50:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.469 12:50:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:18:56.469 12:50:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.469 12:50:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.469 [2024-12-05 12:50:39.049073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:56.730 [2024-12-05 12:50:39.060893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:56.730 12:50:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.730 [2024-12-05 12:50:39.062786] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:18:56.730 12:50:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.730 12:50:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:18:56.730 12:50:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.730 12:50:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.730 12:50:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.730 12:50:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:18:56.730 12:50:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 58752 00:18:56.730 12:50:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 58752 ']' 00:18:56.730 12:50:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 58752 00:18:56.730 12:50:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:18:56.730 12:50:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:18:56.730 12:50:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58752 00:18:56.730 killing process with pid 58752 00:18:56.730 12:50:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:56.730 12:50:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:56.730 12:50:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58752' 00:18:56.730 12:50:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 58752 00:18:56.730 [2024-12-05 12:50:39.117537] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:56.730 12:50:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 58752 00:18:56.730 [2024-12-05 12:50:39.119924] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:18:56.730 [2024-12-05 12:50:39.119978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.730 [2024-12-05 12:50:39.119994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:18:56.730 [2024-12-05 12:50:39.142838] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:56.730 [2024-12-05 12:50:39.143155] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:56.730 [2024-12-05 12:50:39.143171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:18:58.116 [2024-12-05 12:50:40.267125] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:58.685 ************************************ 00:18:58.685 END TEST raid1_resize_data_offset_test 00:18:58.685 ************************************ 00:18:58.686 12:50:41 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:18:58.686 00:18:58.686 real 0m3.426s 00:18:58.686 user 0m3.388s 00:18:58.686 sys 0m0.403s 00:18:58.686 12:50:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.686 12:50:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.686 12:50:41 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:18:58.686 12:50:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:58.686 12:50:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.686 12:50:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.686 ************************************ 00:18:58.686 START TEST raid0_resize_superblock_test 00:18:58.686 ************************************ 00:18:58.686 12:50:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:18:58.686 12:50:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:18:58.686 12:50:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=58819 00:18:58.686 12:50:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 58819' 00:18:58.686 12:50:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:58.686 Process raid pid: 58819 00:18:58.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:58.686 12:50:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 58819 00:18:58.686 12:50:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 58819 ']' 00:18:58.686 12:50:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.686 12:50:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.686 12:50:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.686 12:50:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.686 12:50:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.686 [2024-12-05 12:50:41.115600] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:18:58.686 [2024-12-05 12:50:41.115971] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.945 [2024-12-05 12:50:41.279193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.945 [2024-12-05 12:50:41.382545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.945 [2024-12-05 12:50:41.521640] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.945 [2024-12-05 12:50:41.521678] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.521 12:50:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.521 12:50:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:18:59.522 12:50:41 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:18:59.522 12:50:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.522 12:50:41 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.781 malloc0 00:18:59.781 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.781 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:18:59.781 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.781 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.781 [2024-12-05 12:50:42.273079] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:18:59.781 [2024-12-05 12:50:42.273254] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.781 [2024-12-05 12:50:42.273282] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:59.781 [2024-12-05 12:50:42.273294] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.781 [2024-12-05 12:50:42.275445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.781 [2024-12-05 12:50:42.275481] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:18:59.781 pt0 00:18:59.781 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.781 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:18:59.781 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.781 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.781 6f4a808a-d8a8-4b79-948b-a9f6626c3140 00:18:59.781 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.781 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:18:59.781 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.781 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.781 db205eff-90e1-42da-92fc-cd2867d0bf3f 00:18:59.781 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.781 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:18:59.781 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.781 12:50:42 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.781 eb6d0e08-8098-4806-bb1b-9c83a98b1590 00:18:59.781 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.781 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:18:59.781 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:18:59.781 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.781 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.781 [2024-12-05 12:50:42.362143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev db205eff-90e1-42da-92fc-cd2867d0bf3f is claimed 00:18:59.781 [2024-12-05 12:50:42.362233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev eb6d0e08-8098-4806-bb1b-9c83a98b1590 is claimed 00:18:59.781 [2024-12-05 12:50:42.362366] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:59.781 [2024-12-05 12:50:42.362382] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:18:59.781 [2024-12-05 12:50:42.362668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:59.781 [2024-12-05 12:50:42.362840] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:59.781 [2024-12-05 12:50:42.362849] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:19:00.042 [2024-12-05 12:50:42.363000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.042 [2024-12-05 
12:50:42.434410] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.042 [2024-12-05 12:50:42.462359] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:19:00.042 [2024-12-05 12:50:42.462388] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'db205eff-90e1-42da-92fc-cd2867d0bf3f' was resized: old size 131072, new size 204800 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.042 [2024-12-05 12:50:42.470306] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:19:00.042 [2024-12-05 12:50:42.470330] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'eb6d0e08-8098-4806-bb1b-9c83a98b1590' was resized: old size 131072, new size 204800 00:19:00.042 
[2024-12-05 12:50:42.470360] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:19:00.042 12:50:42 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.042 [2024-12-05 12:50:42.542643] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.042 [2024-12-05 12:50:42.578227] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:19:00.042 [2024-12-05 12:50:42.578395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:19:00.042 [2024-12-05 12:50:42.578460] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:00.042 [2024-12-05 12:50:42.578534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:19:00.042 [2024-12-05 12:50:42.578697] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:00.042 [2024-12-05 12:50:42.578781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:19:00.042 [2024-12-05 12:50:42.578851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.042 [2024-12-05 12:50:42.586142] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:19:00.042 [2024-12-05 12:50:42.586269] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.042 [2024-12-05 12:50:42.586304] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:19:00.042 [2024-12-05 12:50:42.586364] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.042 [2024-12-05 12:50:42.588539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.042 [2024-12-05 12:50:42.588644] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:19:00.042 [2024-12-05 12:50:42.590273] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev db205eff-90e1-42da-92fc-cd2867d0bf3f 00:19:00.042 [2024-12-05 12:50:42.590433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev db205eff-90e1-42da-92fc-cd2867d0bf3f is claimed 00:19:00.042 [2024-12-05 12:50:42.590714] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev eb6d0e08-8098-4806-bb1b-9c83a98b1590 00:19:00.042 [2024-12-05 12:50:42.590739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev eb6d0e08-8098-4806-bb1b-9c83a98b1590 is claimed 00:19:00.042 [2024-12-05 
12:50:42.590897] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev eb6d0e08-8098-4806-bb1b-9c83a98b1590 (2) smaller than existing raid bdev Raid (3) 00:19:00.042 [2024-12-05 12:50:42.590919] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev db205eff-90e1-42da-92fc-cd2867d0bf3f: File exists 00:19:00.042 [2024-12-05 12:50:42.590956] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:00.042 [2024-12-05 12:50:42.590966] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:19:00.042 pt0 00:19:00.042 [2024-12-05 12:50:42.591225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:00.042 [2024-12-05 12:50:42.591365] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:00.042 [2024-12-05 12:50:42.591377] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:19:00.042 [2024-12-05 12:50:42.591553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:19:00.042 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:19:00.043 12:50:42 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.043 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.043 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:19:00.043 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:19:00.043 [2024-12-05 12:50:42.602625] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:00.043 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.302 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:19:00.302 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:19:00.302 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:19:00.302 12:50:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 58819 00:19:00.302 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 58819 ']' 00:19:00.302 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 58819 00:19:00.302 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:19:00.302 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.302 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58819 00:19:00.302 killing process with pid 58819 00:19:00.302 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:00.302 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:00.302 12:50:42 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58819' 00:19:00.302 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 58819 00:19:00.302 [2024-12-05 12:50:42.651723] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:00.302 12:50:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 58819 00:19:00.302 [2024-12-05 12:50:42.651797] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:00.302 [2024-12-05 12:50:42.651844] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:00.302 [2024-12-05 12:50:42.651853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:19:01.242 [2024-12-05 12:50:43.557871] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:01.812 12:50:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:19:01.812 00:19:01.812 real 0m3.258s 00:19:01.812 user 0m3.374s 00:19:01.812 sys 0m0.426s 00:19:01.812 12:50:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.812 12:50:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.812 ************************************ 00:19:01.812 END TEST raid0_resize_superblock_test 00:19:01.812 ************************************ 00:19:01.812 12:50:44 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:19:01.812 12:50:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:01.812 12:50:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:01.812 12:50:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:01.812 ************************************ 00:19:01.812 START TEST raid1_resize_superblock_test 00:19:01.812 
************************************ 00:19:01.812 Process raid pid: 58901 00:19:01.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.812 12:50:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:19:01.812 12:50:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:19:01.812 12:50:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=58901 00:19:01.812 12:50:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 58901' 00:19:01.812 12:50:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 58901 00:19:01.812 12:50:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 58901 ']' 00:19:01.812 12:50:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.812 12:50:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.812 12:50:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.812 12:50:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.812 12:50:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.812 12:50:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:02.072 [2024-12-05 12:50:44.396336] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:19:02.072 [2024-12-05 12:50:44.396459] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.072 [2024-12-05 12:50:44.554851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.331 [2024-12-05 12:50:44.658510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.331 [2024-12-05 12:50:44.799435] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.331 [2024-12-05 12:50:44.799476] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.900 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.900 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:19:02.900 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:19:02.900 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.900 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.160 malloc0 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.160 [2024-12-05 12:50:45.618543] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:19:03.160 [2024-12-05 12:50:45.618712] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.160 [2024-12-05 12:50:45.618741] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:03.160 [2024-12-05 12:50:45.618753] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.160 [2024-12-05 12:50:45.620934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.160 [2024-12-05 12:50:45.620969] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:19:03.160 pt0 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.160 375440d8-a9e7-471d-b7a6-5b931f21a0de 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.160 21dece05-dbd5-405d-a3d6-901385f33e12 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.160 12:50:45 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.160 9c89edb0-1d35-4135-9def-df0368a403e3 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.160 [2024-12-05 12:50:45.704566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 21dece05-dbd5-405d-a3d6-901385f33e12 is claimed 00:19:03.160 [2024-12-05 12:50:45.704648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9c89edb0-1d35-4135-9def-df0368a403e3 is claimed 00:19:03.160 [2024-12-05 12:50:45.704788] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:03.160 [2024-12-05 12:50:45.704802] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:19:03.160 [2024-12-05 12:50:45.705080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:03.160 [2024-12-05 12:50:45.705251] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:03.160 [2024-12-05 12:50:45.705260] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:19:03.160 [2024-12-05 12:50:45.705405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.160 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:19:03.420 [2024-12-05 
12:50:45.776838] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.420 [2024-12-05 12:50:45.808772] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:19:03.420 [2024-12-05 12:50:45.808896] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '21dece05-dbd5-405d-a3d6-901385f33e12' was resized: old size 131072, new size 204800 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.420 [2024-12-05 12:50:45.816714] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:19:03.420 [2024-12-05 12:50:45.816734] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '9c89edb0-1d35-4135-9def-df0368a403e3' was resized: old size 131072, new size 204800 00:19:03.420 
[2024-12-05 12:50:45.816761] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:19:03.420 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.421 12:50:45 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:19:03.421 [2024-12-05 12:50:45.896846] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.421 [2024-12-05 12:50:45.920619] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:19:03.421 [2024-12-05 12:50:45.920688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:19:03.421 [2024-12-05 12:50:45.920712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:19:03.421 [2024-12-05 12:50:45.920852] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:03.421 [2024-12-05 12:50:45.921023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.421 [2024-12-05 12:50:45.921090] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:03.421 
[2024-12-05 12:50:45.921102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.421 [2024-12-05 12:50:45.928547] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:19:03.421 [2024-12-05 12:50:45.928591] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.421 [2024-12-05 12:50:45.928607] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:19:03.421 [2024-12-05 12:50:45.928620] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.421 [2024-12-05 12:50:45.930788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.421 [2024-12-05 12:50:45.930821] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:19:03.421 [2024-12-05 12:50:45.932394] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 21dece05-dbd5-405d-a3d6-901385f33e12 00:19:03.421 [2024-12-05 12:50:45.932566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 21dece05-dbd5-405d-a3d6-901385f33e12 is claimed 00:19:03.421 [2024-12-05 12:50:45.932681] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9c89edb0-1d35-4135-9def-df0368a403e3 00:19:03.421 [2024-12-05 12:50:45.932700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9c89edb0-1d35-4135-9def-df0368a403e3 is claimed 00:19:03.421 [2024-12-05 12:50:45.932817] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 9c89edb0-1d35-4135-9def-df0368a403e3 (2) smaller than existing raid bdev Raid (3) 00:19:03.421 [2024-12-05 12:50:45.932837] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 21dece05-dbd5-405d-a3d6-901385f33e12: File exists 00:19:03.421 [2024-12-05 12:50:45.932877] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:03.421 [2024-12-05 12:50:45.932887] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:03.421 [2024-12-05 12:50:45.933124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:03.421 [2024-12-05 12:50:45.933265] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:03.421 [2024-12-05 12:50:45.933273] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:19:03.421 pt0 00:19:03.421 [2024-12-05 12:50:45.933450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:19:03.421 [2024-12-05 12:50:45.949142] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 58901 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 58901 ']' 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 58901 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.421 12:50:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58901 00:19:03.681 killing process with pid 58901 00:19:03.681 12:50:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:03.681 12:50:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:03.681 12:50:46 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58901' 00:19:03.681 12:50:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 58901 00:19:03.682 [2024-12-05 12:50:46.003849] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:03.682 [2024-12-05 12:50:46.003917] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.682 12:50:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 58901 00:19:03.682 [2024-12-05 12:50:46.003968] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:03.682 [2024-12-05 12:50:46.003977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:19:04.620 [2024-12-05 12:50:46.901474] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:05.211 12:50:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:19:05.211 00:19:05.211 real 0m3.181s 00:19:05.211 user 0m3.374s 00:19:05.211 sys 0m0.405s 00:19:05.211 12:50:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.211 12:50:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.211 ************************************ 00:19:05.211 END TEST raid1_resize_superblock_test 00:19:05.211 ************************************ 00:19:05.211 12:50:47 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:19:05.211 12:50:47 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:19:05.211 12:50:47 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:19:05.211 12:50:47 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:19:05.211 12:50:47 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:19:05.212 12:50:47 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:19:05.212 
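The resize numbers in the log above are internally consistent: each 64 MiB lvol is 131072 blocks of 512 B, the raid1 bdev reports 122880 blocks, and after resizing both lvols to 100 MiB (204800 blocks) the raid grows to 196608. The constant 8192-block (4 MiB) difference is inferred here from those figures as the superblock reservation, not taken from SPDK documentation:

```python
# Checking the resize arithmetic reported by raid1_resize_superblock_test.
# All figures below appear verbatim in the log; only the "overhead" reading
# of the difference is an inference.
BLOCKLEN = 512

old_base = 64 * 1024 * 1024 // BLOCKLEN    # 64 MiB lvol in 512 B blocks
new_base = 100 * 1024 * 1024 // BLOCKLEN   # after bdev_lvol_resize ... 100

assert old_base == 131072   # "old size 131072" in the log
assert new_base == 204800   # "new size 204800"

# raid1 over two equal mirrors exposes one copy, minus a fixed region
overhead = old_base - 122880          # raid initially reported blockcnt 122880
assert overhead == 8192               # 8192 blocks == 4 MiB
assert new_base - overhead == 196608  # "changed from 122880 to 196608"
print("per-mirror reservation:", overhead * BLOCKLEN // (1024 * 1024), "MiB")
```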
12:50:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:05.212 12:50:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.212 12:50:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:05.212 ************************************ 00:19:05.212 START TEST raid_function_test_raid0 00:19:05.212 ************************************ 00:19:05.212 12:50:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:19:05.212 12:50:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:19:05.212 Process raid pid: 58992 00:19:05.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.212 12:50:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:19:05.212 12:50:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:19:05.212 12:50:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=58992 00:19:05.212 12:50:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 58992' 00:19:05.212 12:50:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:05.212 12:50:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 58992 00:19:05.212 12:50:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 58992 ']' 00:19:05.212 12:50:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.212 12:50:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.212 12:50:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:05.212 12:50:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.212 12:50:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:19:05.212 [2024-12-05 12:50:47.632677] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:19:05.212 [2024-12-05 12:50:47.632935] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.502 [2024-12-05 12:50:47.799113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.502 [2024-12-05 12:50:47.954462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.762 [2024-12-05 12:50:48.094621] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.762 [2024-12-05 12:50:48.094808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:06.022 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.022 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:19:06.022 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:19:06.022 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.022 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:19:06.022 Base_1 00:19:06.022 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.022 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:19:06.022 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.022 
12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:19:06.281 Base_2 00:19:06.281 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.282 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:19:06.282 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.282 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:19:06.282 [2024-12-05 12:50:48.622408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:19:06.282 [2024-12-05 12:50:48.624354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:19:06.282 [2024-12-05 12:50:48.624418] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:06.282 [2024-12-05 12:50:48.624430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:06.282 [2024-12-05 12:50:48.624711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:06.282 [2024-12-05 12:50:48.624841] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:06.282 [2024-12-05 12:50:48.624849] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:19:06.282 [2024-12-05 12:50:48.624984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.282 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.282 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:06.282 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:19:06.282 12:50:48 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.282 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:19:06.282 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.282 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:19:06.282 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:19:06.282 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:19:06.282 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:06.282 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:19:06.282 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:06.282 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:06.282 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:06.282 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:19:06.282 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:06.282 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:06.282 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:19:06.282 [2024-12-05 12:50:48.822521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:06.282 /dev/nbd0 00:19:06.282 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:06.540 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:19:06.540 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:06.540 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:19:06.540 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:06.540 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:06.540 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:06.540 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:19:06.540 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:06.541 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:06.541 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:06.541 1+0 records in 00:19:06.541 1+0 records out 00:19:06.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602014 s, 6.8 MB/s 00:19:06.541 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.541 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:19:06.541 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.541 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:06.541 12:50:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:19:06.541 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:06.541 12:50:48 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:06.541 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:19:06.541 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:19:06.541 12:50:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:19:06.541 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:06.541 { 00:19:06.541 "nbd_device": "/dev/nbd0", 00:19:06.541 "bdev_name": "raid" 00:19:06.541 } 00:19:06.541 ]' 00:19:06.541 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:06.541 { 00:19:06.541 "nbd_device": "/dev/nbd0", 00:19:06.541 "bdev_name": "raid" 00:19:06.541 } 00:19:06.541 ]' 00:19:06.541 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:06.541 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:06.541 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:06.541 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:06.541 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:19:06.541 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:19:06.541 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:19:06.541 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:19:06.541 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:19:06.541 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:19:06.541 12:50:49 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:19:06.541 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:19:06.800 4096+0 records in 00:19:06.800 4096+0 records out 00:19:06.800 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0180322 s, 116 MB/s 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:19:06.800 4096+0 records in 00:19:06.800 4096+0 records out 00:19:06.800 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.202685 s, 10.3 MB/s 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:19:06.800 128+0 records in 00:19:06.800 128+0 records out 00:19:06.800 65536 bytes (66 kB, 64 KiB) copied, 0.000526824 s, 124 MB/s 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:19:06.800 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:19:07.061 2035+0 records in 00:19:07.061 2035+0 records out 00:19:07.061 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.00812275 s, 128 MB/s 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:19:07.061 456+0 records in 00:19:07.061 456+0 records out 00:19:07.061 233472 bytes (233 kB, 228 KiB) copied, 0.0020955 s, 111 MB/s 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:07.061 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:07.321 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:07.321 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:07.321 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:07.321 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:07.321 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:07.321 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:07.321 [2024-12-05 12:50:49.653813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.321 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 58992 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 58992 ']' 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 58992 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58992 00:19:07.322 killing process with pid 58992 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58992' 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 58992 00:19:07.322 [2024-12-05 12:50:49.897544] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:07.322 12:50:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 58992 00:19:07.322 [2024-12-05 12:50:49.897637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.322 [2024-12-05 12:50:49.897684] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:07.322 [2024-12-05 12:50:49.897698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:19:07.583 [2024-12-05 12:50:50.025366] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:08.528 12:50:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:19:08.528 00:19:08.528 real 0m3.181s 00:19:08.528 user 0m3.832s 00:19:08.528 sys 0m0.686s 00:19:08.528 12:50:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:08.528 12:50:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:19:08.528 ************************************ 00:19:08.528 END TEST raid_function_test_raid0 00:19:08.528 ************************************ 00:19:08.528 12:50:50 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:19:08.528 12:50:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:08.528 12:50:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:08.528 12:50:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:08.528 
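The raid0 pass above exercises `raid_unmap_data_verify` from `bdev/bdev_raid.sh`: it writes 4096 random 512-byte blocks through the NBD device, then for each (offset, count) pair it zeroes that range in the reference file, issues `blkdiscard` over the same byte range on the device, flushes, and runs `cmp` over the full 2 MiB to confirm the discarded region reads back as zeroes. A minimal self-contained sketch of that verification loop follows; it substitutes a plain scratch file for `/dev/nbd0` (so no NBD device, raid bdev, or root privileges are needed), and models the discard as an in-place zeroing `dd`, which is what the real test expects `blkdiscard` to be equivalent to. File paths and the stand-in device are illustrative, not part of the actual test harness:

```shell
#!/usr/bin/env bash
# Sketch of the unmap-and-verify loop traced in the log, with a scratch
# file standing in for the NBD-backed raid device (/dev/nbd0).
set -euo pipefail

blksize=512
rw_blk_num=4096
ref=$(mktemp)   # reference copy (the log's /raidtest/raidrandtest)
dev=$(mktemp)   # stand-in for the raid device

# Seed both with identical random data (mirrors the log's
# dd if=/dev/urandom ... followed by dd ... of=/dev/nbd0 oflag=direct).
dd if=/dev/urandom of="$ref" bs=$blksize count=$rw_blk_num status=none
cp "$ref" "$dev"

# Same (block offset, block count) pairs as the traced run.
unmap_blk_offs=(0 1028 321)
unmap_blk_nums=(128 2035 456)

for i in "${!unmap_blk_offs[@]}"; do
    off=${unmap_blk_offs[$i]}
    num=${unmap_blk_nums[$i]}
    # Zero the range in the reference file (conv=notrunc keeps the size),
    # matching the log's dd if=/dev/zero ... seek=$off count=$num.
    dd if=/dev/zero of="$ref" bs=$blksize seek=$off count=$num \
        conv=notrunc status=none
    # Zero the same range in the stand-in device; the real test instead
    # issues: blkdiscard -o $((off*blksize)) -l $((num*blksize)) /dev/nbd0
    # followed by: blockdev --flushbufs /dev/nbd0
    dd if=/dev/zero of="$dev" bs=$blksize seek=$off count=$num \
        conv=notrunc status=none
    # Byte-for-byte comparison over the full 2 MiB, as in the log's cmp.
    cmp -b -n $((rw_blk_num * blksize)) "$ref" "$dev"
done

rm -f "$ref" "$dev"
```

If any discarded range failed to read back as zeroes, the `cmp` would exit non-zero and `set -e` would abort the loop, which is exactly how the harness detects a failed unmap before proceeding to `nbd_stop_disks`.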
************************************ 00:19:08.528 START TEST raid_function_test_concat 00:19:08.528 ************************************ 00:19:08.528 12:50:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:19:08.528 Process raid pid: 59116 00:19:08.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.528 12:50:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:19:08.528 12:50:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:19:08.529 12:50:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:19:08.529 12:50:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=59116 00:19:08.529 12:50:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 59116' 00:19:08.529 12:50:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 59116 00:19:08.529 12:50:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 59116 ']' 00:19:08.529 12:50:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.529 12:50:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.529 12:50:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:08.529 12:50:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.529 12:50:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:19:08.529 12:50:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:08.529 [2024-12-05 12:50:50.885687] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:19:08.529 [2024-12-05 12:50:50.885812] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.529 [2024-12-05 12:50:51.046729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.790 [2024-12-05 12:50:51.154631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.790 [2024-12-05 12:50:51.296699] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:08.790 [2024-12-05 12:50:51.296898] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:19:09.360 Base_1 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- 
# rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:19:09.360 Base_2 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:19:09.360 [2024-12-05 12:50:51.789278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:19:09.360 [2024-12-05 12:50:51.791121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:19:09.360 [2024-12-05 12:50:51.791185] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:09.360 [2024-12-05 12:50:51.791196] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:09.360 [2024-12-05 12:50:51.791451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:09.360 [2024-12-05 12:50:51.791595] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:09.360 [2024-12-05 12:50:51.791605] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:19:09.360 [2024-12-05 12:50:51.791738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | 
select(.)' 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:09.360 12:50:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:09.361 12:50:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:09.361 12:50:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:19:09.361 12:50:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:09.361 12:50:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:09.361 12:50:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:19:09.620 [2024-12-05 12:50:52.017365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:09.620 /dev/nbd0 00:19:09.620 12:50:52 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:09.620 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:09.620 12:50:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:09.620 12:50:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:19:09.620 12:50:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:09.620 12:50:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:09.620 12:50:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:09.620 12:50:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:19:09.620 12:50:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:09.620 12:50:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:09.620 12:50:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:09.620 1+0 records in 00:19:09.620 1+0 records out 00:19:09.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030587 s, 13.4 MB/s 00:19:09.620 12:50:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:09.620 12:50:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:19:09.620 12:50:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:09.620 12:50:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:09.620 12:50:52 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@893 -- # return 0 00:19:09.620 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:09.620 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:09.620 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:19:09.620 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:19:09.620 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:09.880 { 00:19:09.880 "nbd_device": "/dev/nbd0", 00:19:09.880 "bdev_name": "raid" 00:19:09.880 } 00:19:09.880 ]' 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:09.880 { 00:19:09.880 "nbd_device": "/dev/nbd0", 00:19:09.880 "bdev_name": "raid" 00:19:09.880 } 00:19:09.880 ]' 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 
-- # raid_unmap_data_verify /dev/nbd0 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:19:09.880 4096+0 records in 00:19:09.880 4096+0 records out 00:19:09.880 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0209229 s, 100 MB/s 00:19:09.880 12:50:52 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:19:10.139 4096+0 records in 00:19:10.139 4096+0 records out 00:19:10.139 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.237071 s, 8.8 MB/s 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:19:10.139 128+0 records in 00:19:10.139 128+0 records out 00:19:10.139 65536 bytes (66 kB, 64 KiB) copied, 0.000642005 s, 102 MB/s 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:19:10.139 12:50:52 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:19:10.139 2035+0 records in 00:19:10.139 2035+0 records out 00:19:10.139 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00890714 s, 117 MB/s 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:19:10.139 456+0 records in 00:19:10.139 456+0 records out 00:19:10.139 233472 bytes (233 kB, 228 KiB) copied, 0.00296622 s, 78.7 MB/s 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:19:10.139 12:50:52 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:10.139 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:10.398 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:10.398 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:10.398 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:10.398 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:10.398 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:10.398 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:10.398 [2024-12-05 12:50:52.890574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:10.398 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:19:10.398 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:19:10.398 12:50:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 
00:19:10.398 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:19:10.398 12:50:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:19:10.657 12:50:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:10.657 12:50:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:10.657 12:50:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:10.657 12:50:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:10.657 12:50:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:19:10.657 12:50:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:10.657 12:50:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:19:10.657 12:50:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:19:10.657 12:50:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:19:10.657 12:50:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:19:10.657 12:50:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:19:10.657 12:50:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 59116 00:19:10.657 12:50:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 59116 ']' 00:19:10.657 12:50:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 59116 00:19:10.657 12:50:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:19:10.657 12:50:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.657 12:50:53 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59116 00:19:10.657 killing process with pid 59116 00:19:10.657 12:50:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:10.657 12:50:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:10.657 12:50:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59116' 00:19:10.657 12:50:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 59116 00:19:10.657 [2024-12-05 12:50:53.170767] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:10.657 12:50:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 59116 00:19:10.657 [2024-12-05 12:50:53.170859] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:10.657 [2024-12-05 12:50:53.170909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:10.657 [2024-12-05 12:50:53.170922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:19:10.918 [2024-12-05 12:50:53.301149] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:11.488 12:50:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:19:11.488 00:19:11.488 real 0m3.204s 00:19:11.488 user 0m3.857s 00:19:11.488 sys 0m0.729s 00:19:11.488 12:50:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:11.488 12:50:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:19:11.488 ************************************ 00:19:11.488 END TEST raid_function_test_concat 00:19:11.488 ************************************ 00:19:11.488 12:50:54 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test 
raid0_resize_test raid_resize_test 0 00:19:11.488 12:50:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:11.488 12:50:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:11.488 12:50:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:11.748 ************************************ 00:19:11.748 START TEST raid0_resize_test 00:19:11.748 ************************************ 00:19:11.748 12:50:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:19:11.748 12:50:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:19:11.748 12:50:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:19:11.748 12:50:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:19:11.748 12:50:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:19:11.748 12:50:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:19:11.748 12:50:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:19:11.748 12:50:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:19:11.748 12:50:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:19:11.748 12:50:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=59233 00:19:11.748 Process raid pid: 59233 00:19:11.748 12:50:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 59233' 00:19:11.748 12:50:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 59233 00:19:11.748 12:50:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 59233 ']' 00:19:11.748 12:50:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:11.748 12:50:54 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.748 12:50:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.748 12:50:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.748 12:50:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.748 12:50:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.748 [2024-12-05 12:50:54.144615] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:19:11.748 [2024-12-05 12:50:54.144738] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.748 [2024-12-05 12:50:54.306065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.007 [2024-12-05 12:50:54.407227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.007 [2024-12-05 12:50:54.545890] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:12.008 [2024-12-05 12:50:54.545933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:12.609 12:50:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.609 12:50:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:19:12.609 12:50:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:19:12.609 12:50:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.609 12:50:54 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:12.609 Base_1 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.609 Base_2 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.609 [2024-12-05 12:50:55.013541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:19:12.609 [2024-12-05 12:50:55.015364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:19:12.609 [2024-12-05 12:50:55.015421] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:12.609 [2024-12-05 12:50:55.015433] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:12.609 [2024-12-05 12:50:55.015709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:19:12.609 [2024-12-05 12:50:55.015837] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:12.609 [2024-12-05 12:50:55.015851] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:19:12.609 [2024-12-05 12:50:55.015982] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.609 [2024-12-05 12:50:55.021517] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:19:12.609 [2024-12-05 12:50:55.021544] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:19:12.609 true 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.609 [2024-12-05 12:50:55.033700] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 
00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.609 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.609 [2024-12-05 12:50:55.065535] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:19:12.610 [2024-12-05 12:50:55.065564] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:19:12.610 [2024-12-05 12:50:55.065594] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:19:12.610 true 00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:19:12.610 [2024-12-05 12:50:55.077728] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 
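The raid0 checks in the log entries above (`blkcnt=262144`, `raid_size_mb=128`, `'[' 128 '!=' 128 ']'`) boil down to a block-count-to-MiB conversion. A minimal sketch of that arithmetic, using the values reported by the log (the variable names `blkcnt` and `raid_size_mb` match the test script; the standalone snippet itself is illustrative, not part of SPDK):

```shell
# Values from the raid0_resize_test log above: two 32 MiB null bdevs,
# each resized to 64 MiB, striped into one raid0 bdev.
blksize=512
blkcnt=262144                                    # num_blocks reported by bdev_get_bdevs -b Raid
raid_size_mb=$(( blkcnt * blksize / 1024 / 1024 ))
echo "$raid_size_mb"                             # raid0 sums its members: 64 + 64 = 128
```

This is why the test's expected size doubles after the second `bdev_null_resize`: raid0 capacity grows only once every base bdev has grown.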
00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 59233 00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 59233 ']' 00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 59233 00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59233 00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:12.610 killing process with pid 59233 00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59233' 00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 59233 00:19:12.610 [2024-12-05 12:50:55.126095] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:12.610 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 59233 00:19:12.610 [2024-12-05 12:50:55.126179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:12.610 [2024-12-05 12:50:55.126230] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:12.610 [2024-12-05 12:50:55.126244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:19:12.610 [2024-12-05 12:50:55.137396] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:13.550 12:50:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:19:13.550 00:19:13.550 real 0m1.781s 00:19:13.550 user 0m1.925s 
00:19:13.550 sys 0m0.258s 00:19:13.550 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.550 12:50:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.550 ************************************ 00:19:13.550 END TEST raid0_resize_test 00:19:13.550 ************************************ 00:19:13.550 12:50:55 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:19:13.550 12:50:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:13.550 12:50:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.550 12:50:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:13.550 ************************************ 00:19:13.550 START TEST raid1_resize_test 00:19:13.550 ************************************ 00:19:13.550 12:50:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:19:13.550 12:50:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:19:13.550 12:50:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:19:13.550 12:50:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:19:13.550 12:50:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:19:13.550 12:50:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:19:13.550 12:50:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:19:13.550 12:50:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:19:13.550 12:50:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:19:13.550 12:50:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=59289 00:19:13.550 Process raid pid: 59289 00:19:13.550 12:50:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process 
raid pid: 59289' 00:19:13.550 12:50:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 59289 00:19:13.550 12:50:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 59289 ']' 00:19:13.550 12:50:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.550 12:50:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:13.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.550 12:50:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.550 12:50:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:13.550 12:50:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:13.550 12:50:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.550 [2024-12-05 12:50:55.975064] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:19:13.550 [2024-12-05 12:50:55.975210] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.550 [2024-12-05 12:50:56.130738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.811 [2024-12-05 12:50:56.231120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.811 [2024-12-05 12:50:56.366938] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:13.811 [2024-12-05 12:50:56.366980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.380 Base_1 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.380 Base_2 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.380 [2024-12-05 12:50:56.846659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:19:14.380 [2024-12-05 12:50:56.848452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:19:14.380 [2024-12-05 12:50:56.848528] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:14.380 [2024-12-05 12:50:56.848540] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:14.380 [2024-12-05 12:50:56.848785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:19:14.380 [2024-12-05 12:50:56.848899] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:14.380 [2024-12-05 12:50:56.848907] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:19:14.380 [2024-12-05 12:50:56.849032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.380 [2024-12-05 12:50:56.854651] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:19:14.380 [2024-12-05 12:50:56.854679] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:19:14.380 true 00:19:14.380 
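The raid1 entries above show the mirror's `num_blocks` staying at 65536 even after `Base_1` is resized to 64 MiB. A small sketch of the underlying rule (the helper variables here are illustrative, not from the SPDK scripts): raid1 capacity follows its smallest member, so growing one leg alone changes nothing.

```shell
# Values from the raid1_resize_test log: Base_1 grown to 64 MiB, Base_2 still 32 MiB.
base1_blocks=131072    # after: bdev_null_resize Base_1 64
base2_blocks=65536     # Base_2 unchanged
raid1_blocks=$base2_blocks
if [ "$base1_blocks" -lt "$base2_blocks" ]; then
    raid1_blocks=$base1_blocks
fi
echo "$raid1_blocks"   # mirror capacity = min(members)
```

Only after the later `bdev_null_resize Base_2 64` does the log report the raid bdev growing from 65536 to 131072 blocks.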
12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:19:14.380 [2024-12-05 12:50:56.862838] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.380 [2024-12-05 12:50:56.898677] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:19:14.380 [2024-12-05 12:50:56.898707] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:19:14.380 [2024-12-05 12:50:56.898736] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:19:14.380 true 00:19:14.380 12:50:56 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:19:14.380 [2024-12-05 12:50:56.910862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 59289 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 59289 ']' 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 59289 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59289 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:14.380 killing process with pid 59289 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59289' 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 59289 00:19:14.380 [2024-12-05 12:50:56.959586] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:14.380 12:50:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 59289 00:19:14.380 [2024-12-05 12:50:56.959659] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:14.380 [2024-12-05 12:50:56.960097] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:14.380 [2024-12-05 12:50:56.960120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:19:14.639 [2024-12-05 12:50:56.970707] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:15.208 12:50:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:19:15.208 00:19:15.208 real 0m1.759s 00:19:15.208 user 0m1.905s 00:19:15.208 sys 0m0.250s 00:19:15.208 12:50:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.208 12:50:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.208 ************************************ 00:19:15.208 END TEST raid1_resize_test 00:19:15.208 ************************************ 00:19:15.208 12:50:57 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:19:15.208 12:50:57 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:19:15.208 12:50:57 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:19:15.208 12:50:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:15.208 12:50:57 bdev_raid 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.208 12:50:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:15.208 ************************************ 00:19:15.208 START TEST raid_state_function_test 00:19:15.208 ************************************ 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=59335 00:19:15.208 Process raid pid: 59335 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 59335' 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 59335 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 59335 ']' 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.208 12:50:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:15.208 [2024-12-05 12:50:57.776905] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:19:15.208 [2024-12-05 12:50:57.777537] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.469 [2024-12-05 12:50:57.932064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.469 [2024-12-05 12:50:58.014046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.730 [2024-12-05 12:50:58.124055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:15.730 [2024-12-05 12:50:58.124088] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.301 [2024-12-05 12:50:58.618855] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:16.301 [2024-12-05 12:50:58.618906] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:16.301 [2024-12-05 12:50:58.618914] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:16.301 [2024-12-05 12:50:58.618922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.301 
12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.301 "name": "Existed_Raid", 00:19:16.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.301 "strip_size_kb": 64, 00:19:16.301 "state": "configuring", 00:19:16.301 "raid_level": "raid0", 00:19:16.301 "superblock": false, 00:19:16.301 "num_base_bdevs": 2, 00:19:16.301 "num_base_bdevs_discovered": 0, 00:19:16.301 "num_base_bdevs_operational": 2, 00:19:16.301 "base_bdevs_list": [ 00:19:16.301 { 00:19:16.301 "name": "BaseBdev1", 00:19:16.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.301 "is_configured": false, 00:19:16.301 "data_offset": 0, 00:19:16.301 "data_size": 0 00:19:16.301 }, 00:19:16.301 { 00:19:16.301 "name": "BaseBdev2", 00:19:16.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.301 "is_configured": false, 00:19:16.301 "data_offset": 0, 00:19:16.301 "data_size": 0 00:19:16.301 } 00:19:16.301 ] 00:19:16.301 }' 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.301 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.647 [2024-12-05 12:50:58.934877] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:16.647 [2024-12-05 12:50:58.934906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:16.647 
12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.647 [2024-12-05 12:50:58.942885] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:16.647 [2024-12-05 12:50:58.942920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:16.647 [2024-12-05 12:50:58.942927] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:16.647 [2024-12-05 12:50:58.942937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.647 [2024-12-05 12:50:58.970986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:16.647 BaseBdev1 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:16.647 12:50:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.647 [ 00:19:16.647 { 00:19:16.647 "name": "BaseBdev1", 00:19:16.647 "aliases": [ 00:19:16.647 "37ce8b82-1cf1-453c-a920-6c227b4e2e30" 00:19:16.647 ], 00:19:16.647 "product_name": "Malloc disk", 00:19:16.647 "block_size": 512, 00:19:16.647 "num_blocks": 65536, 00:19:16.647 "uuid": "37ce8b82-1cf1-453c-a920-6c227b4e2e30", 00:19:16.647 "assigned_rate_limits": { 00:19:16.647 "rw_ios_per_sec": 0, 00:19:16.647 "rw_mbytes_per_sec": 0, 00:19:16.647 "r_mbytes_per_sec": 0, 00:19:16.647 "w_mbytes_per_sec": 0 00:19:16.647 }, 00:19:16.647 "claimed": true, 00:19:16.647 "claim_type": "exclusive_write", 00:19:16.647 "zoned": false, 00:19:16.647 "supported_io_types": { 00:19:16.647 "read": true, 00:19:16.647 "write": true, 00:19:16.647 "unmap": true, 00:19:16.647 "flush": true, 
00:19:16.647 "reset": true, 00:19:16.647 "nvme_admin": false, 00:19:16.647 "nvme_io": false, 00:19:16.647 "nvme_io_md": false, 00:19:16.647 "write_zeroes": true, 00:19:16.647 "zcopy": true, 00:19:16.647 "get_zone_info": false, 00:19:16.647 "zone_management": false, 00:19:16.647 "zone_append": false, 00:19:16.647 "compare": false, 00:19:16.647 "compare_and_write": false, 00:19:16.647 "abort": true, 00:19:16.647 "seek_hole": false, 00:19:16.647 "seek_data": false, 00:19:16.647 "copy": true, 00:19:16.647 "nvme_iov_md": false 00:19:16.647 }, 00:19:16.647 "memory_domains": [ 00:19:16.647 { 00:19:16.647 "dma_device_id": "system", 00:19:16.647 "dma_device_type": 1 00:19:16.647 }, 00:19:16.647 { 00:19:16.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.647 "dma_device_type": 2 00:19:16.647 } 00:19:16.647 ], 00:19:16.647 "driver_specific": {} 00:19:16.647 } 00:19:16.647 ] 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.647 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.648 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.648 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.648 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.648 12:50:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.648 12:50:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.648 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.648 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.648 "name": "Existed_Raid", 00:19:16.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.648 "strip_size_kb": 64, 00:19:16.648 "state": "configuring", 00:19:16.648 "raid_level": "raid0", 00:19:16.648 "superblock": false, 00:19:16.648 "num_base_bdevs": 2, 00:19:16.648 "num_base_bdevs_discovered": 1, 00:19:16.648 "num_base_bdevs_operational": 2, 00:19:16.648 "base_bdevs_list": [ 00:19:16.648 { 00:19:16.648 "name": "BaseBdev1", 00:19:16.648 "uuid": "37ce8b82-1cf1-453c-a920-6c227b4e2e30", 00:19:16.648 "is_configured": true, 00:19:16.648 "data_offset": 0, 00:19:16.648 "data_size": 65536 00:19:16.648 }, 00:19:16.648 { 00:19:16.648 "name": "BaseBdev2", 00:19:16.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.648 "is_configured": false, 00:19:16.648 "data_offset": 0, 00:19:16.648 "data_size": 0 00:19:16.648 } 00:19:16.648 ] 00:19:16.648 }' 00:19:16.648 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.648 12:50:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.921 [2024-12-05 12:50:59.303094] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:16.921 [2024-12-05 12:50:59.303145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.921 [2024-12-05 12:50:59.311152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:16.921 [2024-12-05 12:50:59.312738] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:16.921 [2024-12-05 12:50:59.312777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 
00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.921 "name": "Existed_Raid", 00:19:16.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.921 "strip_size_kb": 64, 00:19:16.921 "state": "configuring", 00:19:16.921 "raid_level": "raid0", 00:19:16.921 "superblock": false, 00:19:16.921 "num_base_bdevs": 2, 00:19:16.921 
"num_base_bdevs_discovered": 1, 00:19:16.921 "num_base_bdevs_operational": 2, 00:19:16.921 "base_bdevs_list": [ 00:19:16.921 { 00:19:16.921 "name": "BaseBdev1", 00:19:16.921 "uuid": "37ce8b82-1cf1-453c-a920-6c227b4e2e30", 00:19:16.921 "is_configured": true, 00:19:16.921 "data_offset": 0, 00:19:16.921 "data_size": 65536 00:19:16.921 }, 00:19:16.921 { 00:19:16.921 "name": "BaseBdev2", 00:19:16.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.921 "is_configured": false, 00:19:16.921 "data_offset": 0, 00:19:16.921 "data_size": 0 00:19:16.921 } 00:19:16.921 ] 00:19:16.921 }' 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.921 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.183 [2024-12-05 12:50:59.637861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:17.183 [2024-12-05 12:50:59.637906] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:17.183 [2024-12-05 12:50:59.637915] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:17.183 [2024-12-05 12:50:59.638171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:17.183 [2024-12-05 12:50:59.638311] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:17.183 [2024-12-05 12:50:59.638331] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:17.183 [2024-12-05 12:50:59.638578] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.183 BaseBdev2 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.183 [ 00:19:17.183 { 00:19:17.183 "name": "BaseBdev2", 00:19:17.183 "aliases": [ 00:19:17.183 "d4bcbac7-538f-4038-b011-cbfbbb2098fb" 00:19:17.183 ], 00:19:17.183 "product_name": "Malloc disk", 00:19:17.183 "block_size": 512, 00:19:17.183 "num_blocks": 65536, 00:19:17.183 "uuid": "d4bcbac7-538f-4038-b011-cbfbbb2098fb", 00:19:17.183 
"assigned_rate_limits": { 00:19:17.183 "rw_ios_per_sec": 0, 00:19:17.183 "rw_mbytes_per_sec": 0, 00:19:17.183 "r_mbytes_per_sec": 0, 00:19:17.183 "w_mbytes_per_sec": 0 00:19:17.183 }, 00:19:17.183 "claimed": true, 00:19:17.183 "claim_type": "exclusive_write", 00:19:17.183 "zoned": false, 00:19:17.183 "supported_io_types": { 00:19:17.183 "read": true, 00:19:17.183 "write": true, 00:19:17.183 "unmap": true, 00:19:17.183 "flush": true, 00:19:17.183 "reset": true, 00:19:17.183 "nvme_admin": false, 00:19:17.183 "nvme_io": false, 00:19:17.183 "nvme_io_md": false, 00:19:17.183 "write_zeroes": true, 00:19:17.183 "zcopy": true, 00:19:17.183 "get_zone_info": false, 00:19:17.183 "zone_management": false, 00:19:17.183 "zone_append": false, 00:19:17.183 "compare": false, 00:19:17.183 "compare_and_write": false, 00:19:17.183 "abort": true, 00:19:17.183 "seek_hole": false, 00:19:17.183 "seek_data": false, 00:19:17.183 "copy": true, 00:19:17.183 "nvme_iov_md": false 00:19:17.183 }, 00:19:17.183 "memory_domains": [ 00:19:17.183 { 00:19:17.183 "dma_device_id": "system", 00:19:17.183 "dma_device_type": 1 00:19:17.183 }, 00:19:17.183 { 00:19:17.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.183 "dma_device_type": 2 00:19:17.183 } 00:19:17.183 ], 00:19:17.183 "driver_specific": {} 00:19:17.183 } 00:19:17.183 ] 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.183 "name": "Existed_Raid", 00:19:17.183 "uuid": "4a137302-cfd4-4417-ab72-c7e23d39f241", 00:19:17.183 "strip_size_kb": 64, 00:19:17.183 "state": "online", 00:19:17.183 "raid_level": "raid0", 00:19:17.183 "superblock": false, 00:19:17.183 "num_base_bdevs": 2, 00:19:17.183 "num_base_bdevs_discovered": 2, 00:19:17.183 "num_base_bdevs_operational": 2, 00:19:17.183 "base_bdevs_list": [ 00:19:17.183 { 
00:19:17.183 "name": "BaseBdev1", 00:19:17.183 "uuid": "37ce8b82-1cf1-453c-a920-6c227b4e2e30", 00:19:17.183 "is_configured": true, 00:19:17.183 "data_offset": 0, 00:19:17.183 "data_size": 65536 00:19:17.183 }, 00:19:17.183 { 00:19:17.183 "name": "BaseBdev2", 00:19:17.183 "uuid": "d4bcbac7-538f-4038-b011-cbfbbb2098fb", 00:19:17.183 "is_configured": true, 00:19:17.183 "data_offset": 0, 00:19:17.183 "data_size": 65536 00:19:17.183 } 00:19:17.183 ] 00:19:17.183 }' 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.183 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.446 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:17.446 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:17.446 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:17.446 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:17.446 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:17.446 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:17.446 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:17.446 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:17.446 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.446 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.446 [2024-12-05 12:50:59.978263] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:17.446 12:50:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:19:17.446 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:17.446 "name": "Existed_Raid", 00:19:17.446 "aliases": [ 00:19:17.446 "4a137302-cfd4-4417-ab72-c7e23d39f241" 00:19:17.446 ], 00:19:17.446 "product_name": "Raid Volume", 00:19:17.446 "block_size": 512, 00:19:17.446 "num_blocks": 131072, 00:19:17.446 "uuid": "4a137302-cfd4-4417-ab72-c7e23d39f241", 00:19:17.446 "assigned_rate_limits": { 00:19:17.446 "rw_ios_per_sec": 0, 00:19:17.446 "rw_mbytes_per_sec": 0, 00:19:17.446 "r_mbytes_per_sec": 0, 00:19:17.446 "w_mbytes_per_sec": 0 00:19:17.446 }, 00:19:17.446 "claimed": false, 00:19:17.446 "zoned": false, 00:19:17.446 "supported_io_types": { 00:19:17.446 "read": true, 00:19:17.446 "write": true, 00:19:17.446 "unmap": true, 00:19:17.446 "flush": true, 00:19:17.446 "reset": true, 00:19:17.446 "nvme_admin": false, 00:19:17.446 "nvme_io": false, 00:19:17.446 "nvme_io_md": false, 00:19:17.446 "write_zeroes": true, 00:19:17.446 "zcopy": false, 00:19:17.446 "get_zone_info": false, 00:19:17.446 "zone_management": false, 00:19:17.446 "zone_append": false, 00:19:17.446 "compare": false, 00:19:17.446 "compare_and_write": false, 00:19:17.446 "abort": false, 00:19:17.446 "seek_hole": false, 00:19:17.446 "seek_data": false, 00:19:17.446 "copy": false, 00:19:17.446 "nvme_iov_md": false 00:19:17.446 }, 00:19:17.446 "memory_domains": [ 00:19:17.446 { 00:19:17.446 "dma_device_id": "system", 00:19:17.446 "dma_device_type": 1 00:19:17.446 }, 00:19:17.446 { 00:19:17.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.446 "dma_device_type": 2 00:19:17.446 }, 00:19:17.446 { 00:19:17.446 "dma_device_id": "system", 00:19:17.446 "dma_device_type": 1 00:19:17.446 }, 00:19:17.446 { 00:19:17.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.446 "dma_device_type": 2 00:19:17.446 } 00:19:17.446 ], 00:19:17.446 "driver_specific": { 00:19:17.446 "raid": { 00:19:17.446 "uuid": "4a137302-cfd4-4417-ab72-c7e23d39f241", 
00:19:17.446 "strip_size_kb": 64, 00:19:17.446 "state": "online", 00:19:17.446 "raid_level": "raid0", 00:19:17.446 "superblock": false, 00:19:17.446 "num_base_bdevs": 2, 00:19:17.446 "num_base_bdevs_discovered": 2, 00:19:17.446 "num_base_bdevs_operational": 2, 00:19:17.446 "base_bdevs_list": [ 00:19:17.446 { 00:19:17.446 "name": "BaseBdev1", 00:19:17.446 "uuid": "37ce8b82-1cf1-453c-a920-6c227b4e2e30", 00:19:17.446 "is_configured": true, 00:19:17.446 "data_offset": 0, 00:19:17.446 "data_size": 65536 00:19:17.446 }, 00:19:17.446 { 00:19:17.446 "name": "BaseBdev2", 00:19:17.446 "uuid": "d4bcbac7-538f-4038-b011-cbfbbb2098fb", 00:19:17.446 "is_configured": true, 00:19:17.446 "data_offset": 0, 00:19:17.446 "data_size": 65536 00:19:17.446 } 00:19:17.446 ] 00:19:17.446 } 00:19:17.446 } 00:19:17.446 }' 00:19:17.446 12:50:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:17.446 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:17.446 BaseBdev2' 00:19:17.446 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.707 [2024-12-05 12:51:00.134052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:17.707 [2024-12-05 12:51:00.134088] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:17.707 [2024-12-05 12:51:00.134139] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:17.707 12:51:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:19:17.707 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.708 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.708 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.708 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.708 "name": "Existed_Raid", 00:19:17.708 "uuid": "4a137302-cfd4-4417-ab72-c7e23d39f241", 00:19:17.708 "strip_size_kb": 64, 00:19:17.708 "state": "offline", 00:19:17.708 "raid_level": "raid0", 00:19:17.708 "superblock": false, 00:19:17.708 "num_base_bdevs": 2, 00:19:17.708 "num_base_bdevs_discovered": 1, 00:19:17.708 "num_base_bdevs_operational": 1, 00:19:17.708 "base_bdevs_list": [ 00:19:17.708 { 00:19:17.708 "name": null, 00:19:17.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.708 "is_configured": false, 00:19:17.708 "data_offset": 0, 00:19:17.708 "data_size": 65536 00:19:17.708 }, 00:19:17.708 { 00:19:17.708 "name": "BaseBdev2", 00:19:17.708 "uuid": "d4bcbac7-538f-4038-b011-cbfbbb2098fb", 00:19:17.708 "is_configured": true, 00:19:17.708 "data_offset": 0, 00:19:17.708 "data_size": 65536 00:19:17.708 } 00:19:17.708 ] 00:19:17.708 }' 00:19:17.708 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.708 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.969 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:17.969 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:17.969 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.969 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.969 12:51:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.969 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:17.969 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.969 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:17.969 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:17.969 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:17.969 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.969 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.969 [2024-12-05 12:51:00.532474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:17.969 [2024-12-05 12:51:00.532535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:18.236 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.236 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:18.236 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:18.236 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.236 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:18.236 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.236 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.236 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:19:18.236 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:18.236 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:18.236 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:18.236 12:51:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 59335 00:19:18.236 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 59335 ']' 00:19:18.236 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 59335 00:19:18.236 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:19:18.236 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.237 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59335 00:19:18.237 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:18.237 killing process with pid 59335 00:19:18.237 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:18.237 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59335' 00:19:18.237 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 59335 00:19:18.237 [2024-12-05 12:51:00.652534] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:18.237 12:51:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 59335 00:19:18.237 [2024-12-05 12:51:00.663018] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:18.808 12:51:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:18.808 00:19:18.808 real 0m3.673s 00:19:18.808 user 0m5.298s 00:19:18.808 sys 
0m0.543s 00:19:18.808 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.808 12:51:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.808 ************************************ 00:19:18.808 END TEST raid_state_function_test 00:19:18.808 ************************************ 00:19:19.070 12:51:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:19:19.070 12:51:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:19.070 12:51:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:19.070 12:51:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:19.070 ************************************ 00:19:19.070 START TEST raid_state_function_test_sb 00:19:19.070 ************************************ 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:19.070 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:19.071 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=59577 00:19:19.071 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 59577' 00:19:19.071 Process raid pid: 59577 00:19:19.071 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 59577 00:19:19.071 
12:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 59577 ']' 00:19:19.071 12:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.071 12:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.071 12:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.071 12:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.071 12:51:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:19.071 12:51:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.071 [2024-12-05 12:51:01.494871] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:19:19.071 [2024-12-05 12:51:01.494998] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.332 [2024-12-05 12:51:01.654323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.332 [2024-12-05 12:51:01.755037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.332 [2024-12-05 12:51:01.892340] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.332 [2024-12-05 12:51:01.892382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.904 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.904 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:19.904 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:19.904 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.904 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.904 [2024-12-05 12:51:02.359606] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:19.904 [2024-12-05 12:51:02.359661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:19.904 [2024-12-05 12:51:02.359675] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:19.904 [2024-12-05 12:51:02.359684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:19.904 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.904 
12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:19:19.904 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:19.904 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:19.904 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:19.904 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:19.904 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:19.904 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.904 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.904 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.904 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.904 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.904 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.904 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.904 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.904 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.904 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.904 "name": "Existed_Raid", 00:19:19.904 "uuid": "f8b09e3b-62ac-4677-a6e1-44fdaed5a39a", 00:19:19.904 "strip_size_kb": 
64, 00:19:19.904 "state": "configuring", 00:19:19.904 "raid_level": "raid0", 00:19:19.904 "superblock": true, 00:19:19.904 "num_base_bdevs": 2, 00:19:19.904 "num_base_bdevs_discovered": 0, 00:19:19.904 "num_base_bdevs_operational": 2, 00:19:19.904 "base_bdevs_list": [ 00:19:19.904 { 00:19:19.904 "name": "BaseBdev1", 00:19:19.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.904 "is_configured": false, 00:19:19.904 "data_offset": 0, 00:19:19.904 "data_size": 0 00:19:19.904 }, 00:19:19.904 { 00:19:19.904 "name": "BaseBdev2", 00:19:19.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.904 "is_configured": false, 00:19:19.904 "data_offset": 0, 00:19:19.904 "data_size": 0 00:19:19.904 } 00:19:19.904 ] 00:19:19.904 }' 00:19:19.905 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.905 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.165 [2024-12-05 12:51:02.671619] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:20.165 [2024-12-05 12:51:02.671653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.165 12:51:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.165 [2024-12-05 12:51:02.679630] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:20.165 [2024-12-05 12:51:02.679667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:20.165 [2024-12-05 12:51:02.679675] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:20.165 [2024-12-05 12:51:02.679687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.165 [2024-12-05 12:51:02.711912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:20.165 BaseBdev1 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.165 [ 00:19:20.165 { 00:19:20.165 "name": "BaseBdev1", 00:19:20.165 "aliases": [ 00:19:20.165 "e3502c37-85d4-4f14-9bc6-b290fcc62bd6" 00:19:20.165 ], 00:19:20.165 "product_name": "Malloc disk", 00:19:20.165 "block_size": 512, 00:19:20.165 "num_blocks": 65536, 00:19:20.165 "uuid": "e3502c37-85d4-4f14-9bc6-b290fcc62bd6", 00:19:20.165 "assigned_rate_limits": { 00:19:20.165 "rw_ios_per_sec": 0, 00:19:20.165 "rw_mbytes_per_sec": 0, 00:19:20.165 "r_mbytes_per_sec": 0, 00:19:20.165 "w_mbytes_per_sec": 0 00:19:20.165 }, 00:19:20.165 "claimed": true, 00:19:20.165 "claim_type": "exclusive_write", 00:19:20.165 "zoned": false, 00:19:20.165 "supported_io_types": { 00:19:20.165 "read": true, 00:19:20.165 "write": true, 00:19:20.165 "unmap": true, 00:19:20.165 "flush": true, 00:19:20.165 "reset": true, 00:19:20.165 "nvme_admin": false, 00:19:20.165 "nvme_io": false, 00:19:20.165 "nvme_io_md": false, 00:19:20.165 "write_zeroes": true, 00:19:20.165 "zcopy": true, 00:19:20.165 "get_zone_info": false, 00:19:20.165 "zone_management": false, 00:19:20.165 "zone_append": false, 00:19:20.165 "compare": false, 00:19:20.165 "compare_and_write": false, 00:19:20.165 
"abort": true, 00:19:20.165 "seek_hole": false, 00:19:20.165 "seek_data": false, 00:19:20.165 "copy": true, 00:19:20.165 "nvme_iov_md": false 00:19:20.165 }, 00:19:20.165 "memory_domains": [ 00:19:20.165 { 00:19:20.165 "dma_device_id": "system", 00:19:20.165 "dma_device_type": 1 00:19:20.165 }, 00:19:20.165 { 00:19:20.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.165 "dma_device_type": 2 00:19:20.165 } 00:19:20.165 ], 00:19:20.165 "driver_specific": {} 00:19:20.165 } 00:19:20.165 ] 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:19:20.165 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:20.166 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:20.166 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:20.166 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.166 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:20.166 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.166 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.166 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.166 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.166 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:20.166 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.166 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.166 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.425 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.425 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.425 "name": "Existed_Raid", 00:19:20.425 "uuid": "03f0ae66-e521-4757-8d7e-f57610c3282b", 00:19:20.425 "strip_size_kb": 64, 00:19:20.425 "state": "configuring", 00:19:20.425 "raid_level": "raid0", 00:19:20.425 "superblock": true, 00:19:20.425 "num_base_bdevs": 2, 00:19:20.425 "num_base_bdevs_discovered": 1, 00:19:20.425 "num_base_bdevs_operational": 2, 00:19:20.425 "base_bdevs_list": [ 00:19:20.425 { 00:19:20.425 "name": "BaseBdev1", 00:19:20.425 "uuid": "e3502c37-85d4-4f14-9bc6-b290fcc62bd6", 00:19:20.425 "is_configured": true, 00:19:20.425 "data_offset": 2048, 00:19:20.425 "data_size": 63488 00:19:20.425 }, 00:19:20.425 { 00:19:20.425 "name": "BaseBdev2", 00:19:20.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.425 "is_configured": false, 00:19:20.425 "data_offset": 0, 00:19:20.425 "data_size": 0 00:19:20.425 } 00:19:20.425 ] 00:19:20.425 }' 00:19:20.425 12:51:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.425 12:51:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:20.688 [2024-12-05 12:51:03.052035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:20.688 [2024-12-05 12:51:03.052080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.688 [2024-12-05 12:51:03.060079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:20.688 [2024-12-05 12:51:03.061964] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:20.688 [2024-12-05 12:51:03.062007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.688 "name": "Existed_Raid", 00:19:20.688 "uuid": "9e7a1d2d-4ef8-4fe0-b529-ea453d4f09e3", 00:19:20.688 "strip_size_kb": 64, 00:19:20.688 "state": "configuring", 00:19:20.688 "raid_level": "raid0", 00:19:20.688 "superblock": true, 00:19:20.688 "num_base_bdevs": 2, 00:19:20.688 "num_base_bdevs_discovered": 1, 00:19:20.688 "num_base_bdevs_operational": 2, 00:19:20.688 "base_bdevs_list": [ 00:19:20.688 { 00:19:20.688 "name": "BaseBdev1", 00:19:20.688 "uuid": "e3502c37-85d4-4f14-9bc6-b290fcc62bd6", 00:19:20.688 "is_configured": true, 00:19:20.688 "data_offset": 2048, 
00:19:20.688 "data_size": 63488 00:19:20.688 }, 00:19:20.688 { 00:19:20.688 "name": "BaseBdev2", 00:19:20.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.688 "is_configured": false, 00:19:20.688 "data_offset": 0, 00:19:20.688 "data_size": 0 00:19:20.688 } 00:19:20.688 ] 00:19:20.688 }' 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.688 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.949 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:20.949 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.949 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.949 [2024-12-05 12:51:03.407101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:20.949 [2024-12-05 12:51:03.407346] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:20.949 [2024-12-05 12:51:03.407359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:20.949 BaseBdev2 00:19:20.949 [2024-12-05 12:51:03.407635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:20.949 [2024-12-05 12:51:03.407781] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:20.949 [2024-12-05 12:51:03.407799] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:20.949 [2024-12-05 12:51:03.407923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.949 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.949 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.950 [ 00:19:20.950 { 00:19:20.950 "name": "BaseBdev2", 00:19:20.950 "aliases": [ 00:19:20.950 "c7d14ce4-2170-413b-9335-fd1a4298080c" 00:19:20.950 ], 00:19:20.950 "product_name": "Malloc disk", 00:19:20.950 "block_size": 512, 00:19:20.950 "num_blocks": 65536, 00:19:20.950 "uuid": "c7d14ce4-2170-413b-9335-fd1a4298080c", 00:19:20.950 "assigned_rate_limits": { 00:19:20.950 "rw_ios_per_sec": 0, 00:19:20.950 "rw_mbytes_per_sec": 0, 00:19:20.950 "r_mbytes_per_sec": 0, 00:19:20.950 "w_mbytes_per_sec": 0 00:19:20.950 }, 00:19:20.950 "claimed": true, 00:19:20.950 "claim_type": 
"exclusive_write", 00:19:20.950 "zoned": false, 00:19:20.950 "supported_io_types": { 00:19:20.950 "read": true, 00:19:20.950 "write": true, 00:19:20.950 "unmap": true, 00:19:20.950 "flush": true, 00:19:20.950 "reset": true, 00:19:20.950 "nvme_admin": false, 00:19:20.950 "nvme_io": false, 00:19:20.950 "nvme_io_md": false, 00:19:20.950 "write_zeroes": true, 00:19:20.950 "zcopy": true, 00:19:20.950 "get_zone_info": false, 00:19:20.950 "zone_management": false, 00:19:20.950 "zone_append": false, 00:19:20.950 "compare": false, 00:19:20.950 "compare_and_write": false, 00:19:20.950 "abort": true, 00:19:20.950 "seek_hole": false, 00:19:20.950 "seek_data": false, 00:19:20.950 "copy": true, 00:19:20.950 "nvme_iov_md": false 00:19:20.950 }, 00:19:20.950 "memory_domains": [ 00:19:20.950 { 00:19:20.950 "dma_device_id": "system", 00:19:20.950 "dma_device_type": 1 00:19:20.950 }, 00:19:20.950 { 00:19:20.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.950 "dma_device_type": 2 00:19:20.950 } 00:19:20.950 ], 00:19:20.950 "driver_specific": {} 00:19:20.950 } 00:19:20.950 ] 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.950 "name": "Existed_Raid", 00:19:20.950 "uuid": "9e7a1d2d-4ef8-4fe0-b529-ea453d4f09e3", 00:19:20.950 "strip_size_kb": 64, 00:19:20.950 "state": "online", 00:19:20.950 "raid_level": "raid0", 00:19:20.950 "superblock": true, 00:19:20.950 "num_base_bdevs": 2, 00:19:20.950 "num_base_bdevs_discovered": 2, 00:19:20.950 "num_base_bdevs_operational": 2, 00:19:20.950 "base_bdevs_list": [ 00:19:20.950 { 00:19:20.950 "name": "BaseBdev1", 00:19:20.950 "uuid": "e3502c37-85d4-4f14-9bc6-b290fcc62bd6", 00:19:20.950 "is_configured": true, 00:19:20.950 "data_offset": 2048, 00:19:20.950 "data_size": 63488 
00:19:20.950 }, 00:19:20.950 { 00:19:20.950 "name": "BaseBdev2", 00:19:20.950 "uuid": "c7d14ce4-2170-413b-9335-fd1a4298080c", 00:19:20.950 "is_configured": true, 00:19:20.950 "data_offset": 2048, 00:19:20.950 "data_size": 63488 00:19:20.950 } 00:19:20.950 ] 00:19:20.950 }' 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.950 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.213 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:21.213 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:21.213 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:21.213 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:21.213 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:21.213 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:21.213 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:21.213 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:21.213 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.213 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.213 [2024-12-05 12:51:03.771538] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:21.213 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.213 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:21.213 "name": 
"Existed_Raid", 00:19:21.213 "aliases": [ 00:19:21.213 "9e7a1d2d-4ef8-4fe0-b529-ea453d4f09e3" 00:19:21.213 ], 00:19:21.213 "product_name": "Raid Volume", 00:19:21.213 "block_size": 512, 00:19:21.213 "num_blocks": 126976, 00:19:21.213 "uuid": "9e7a1d2d-4ef8-4fe0-b529-ea453d4f09e3", 00:19:21.213 "assigned_rate_limits": { 00:19:21.213 "rw_ios_per_sec": 0, 00:19:21.213 "rw_mbytes_per_sec": 0, 00:19:21.213 "r_mbytes_per_sec": 0, 00:19:21.213 "w_mbytes_per_sec": 0 00:19:21.213 }, 00:19:21.213 "claimed": false, 00:19:21.213 "zoned": false, 00:19:21.213 "supported_io_types": { 00:19:21.213 "read": true, 00:19:21.213 "write": true, 00:19:21.213 "unmap": true, 00:19:21.213 "flush": true, 00:19:21.213 "reset": true, 00:19:21.213 "nvme_admin": false, 00:19:21.213 "nvme_io": false, 00:19:21.213 "nvme_io_md": false, 00:19:21.213 "write_zeroes": true, 00:19:21.213 "zcopy": false, 00:19:21.213 "get_zone_info": false, 00:19:21.213 "zone_management": false, 00:19:21.213 "zone_append": false, 00:19:21.213 "compare": false, 00:19:21.213 "compare_and_write": false, 00:19:21.213 "abort": false, 00:19:21.213 "seek_hole": false, 00:19:21.213 "seek_data": false, 00:19:21.213 "copy": false, 00:19:21.213 "nvme_iov_md": false 00:19:21.213 }, 00:19:21.213 "memory_domains": [ 00:19:21.213 { 00:19:21.213 "dma_device_id": "system", 00:19:21.213 "dma_device_type": 1 00:19:21.213 }, 00:19:21.213 { 00:19:21.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.213 "dma_device_type": 2 00:19:21.213 }, 00:19:21.213 { 00:19:21.213 "dma_device_id": "system", 00:19:21.213 "dma_device_type": 1 00:19:21.213 }, 00:19:21.213 { 00:19:21.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.213 "dma_device_type": 2 00:19:21.213 } 00:19:21.213 ], 00:19:21.213 "driver_specific": { 00:19:21.213 "raid": { 00:19:21.213 "uuid": "9e7a1d2d-4ef8-4fe0-b529-ea453d4f09e3", 00:19:21.213 "strip_size_kb": 64, 00:19:21.213 "state": "online", 00:19:21.213 "raid_level": "raid0", 00:19:21.213 "superblock": true, 00:19:21.213 
"num_base_bdevs": 2, 00:19:21.213 "num_base_bdevs_discovered": 2, 00:19:21.213 "num_base_bdevs_operational": 2, 00:19:21.213 "base_bdevs_list": [ 00:19:21.213 { 00:19:21.213 "name": "BaseBdev1", 00:19:21.213 "uuid": "e3502c37-85d4-4f14-9bc6-b290fcc62bd6", 00:19:21.213 "is_configured": true, 00:19:21.213 "data_offset": 2048, 00:19:21.213 "data_size": 63488 00:19:21.213 }, 00:19:21.213 { 00:19:21.213 "name": "BaseBdev2", 00:19:21.213 "uuid": "c7d14ce4-2170-413b-9335-fd1a4298080c", 00:19:21.213 "is_configured": true, 00:19:21.213 "data_offset": 2048, 00:19:21.213 "data_size": 63488 00:19:21.213 } 00:19:21.213 ] 00:19:21.213 } 00:19:21.213 } 00:19:21.213 }' 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:21.476 BaseBdev2' 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.476 [2024-12-05 12:51:03.931309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:21.476 [2024-12-05 12:51:03.931340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:21.476 [2024-12-05 12:51:03.931385] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:19:21.476 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:19:21.477 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:21.477 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:19:21.477 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:21.477 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:21.477 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:21.477 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.477 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.477 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.477 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.477 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.477 12:51:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.477 12:51:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.477 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.477 12:51:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.477 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.477 "name": "Existed_Raid", 00:19:21.477 "uuid": "9e7a1d2d-4ef8-4fe0-b529-ea453d4f09e3", 00:19:21.477 "strip_size_kb": 64, 00:19:21.477 "state": "offline", 00:19:21.477 "raid_level": "raid0", 00:19:21.477 "superblock": true, 00:19:21.477 "num_base_bdevs": 2, 00:19:21.477 "num_base_bdevs_discovered": 1, 00:19:21.477 "num_base_bdevs_operational": 1, 00:19:21.477 "base_bdevs_list": [ 00:19:21.477 { 00:19:21.477 "name": null, 00:19:21.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.477 "is_configured": false, 00:19:21.477 "data_offset": 0, 00:19:21.477 "data_size": 63488 00:19:21.477 }, 00:19:21.477 { 00:19:21.477 "name": "BaseBdev2", 00:19:21.477 "uuid": "c7d14ce4-2170-413b-9335-fd1a4298080c", 00:19:21.477 "is_configured": true, 00:19:21.477 "data_offset": 2048, 00:19:21.477 "data_size": 63488 00:19:21.477 } 00:19:21.477 ] 00:19:21.477 }' 00:19:21.477 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.477 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.739 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:21.739 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:21.739 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.739 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.739 12:51:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:21.739 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.739 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.000 [2024-12-05 12:51:04.334812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:22.000 [2024-12-05 12:51:04.334861] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 59577 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 59577 ']' 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 59577 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59577 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:22.000 killing process with pid 59577 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59577' 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 59577 00:19:22.000 [2024-12-05 12:51:04.442016] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:22.000 12:51:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 59577 00:19:22.000 [2024-12-05 12:51:04.450502] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:22.573 12:51:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:19:22.573 00:19:22.573 real 0m3.605s 00:19:22.573 user 0m5.310s 00:19:22.573 sys 0m0.548s 00:19:22.573 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:22.573 12:51:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.573 ************************************ 00:19:22.573 END TEST raid_state_function_test_sb 00:19:22.573 ************************************ 00:19:22.573 12:51:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:19:22.573 12:51:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:22.573 12:51:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:22.573 12:51:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:22.573 ************************************ 00:19:22.573 START TEST raid_superblock_test 00:19:22.573 ************************************ 00:19:22.573 12:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:19:22.573 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:19:22.573 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:22.573 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:22.573 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:22.573 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:22.573 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:22.573 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:22.573 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:22.573 12:51:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:22.573 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:22.573 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:22.573 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:22.573 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:22.573 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:19:22.573 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:22.573 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:22.574 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=59813 00:19:22.574 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 59813 00:19:22.574 12:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59813 ']' 00:19:22.574 12:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.574 12:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.574 12:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:22.574 12:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.574 12:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.574 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:22.574 [2024-12-05 12:51:05.132979] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:19:22.574 [2024-12-05 12:51:05.133110] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59813 ] 00:19:22.834 [2024-12-05 12:51:05.287077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.834 [2024-12-05 12:51:05.373874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.095 [2024-12-05 12:51:05.486594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:23.095 [2024-12-05 12:51:05.486639] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:23.667 12:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.667 12:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:19:23.667 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:23.667 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:23.667 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:23.667 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:23.667 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:23.667 
12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:23.667 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:23.667 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:23.667 12:51:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:19:23.667 12:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.667 12:51:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.667 malloc1 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.667 [2024-12-05 12:51:06.011621] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:23.667 [2024-12-05 12:51:06.011675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.667 [2024-12-05 12:51:06.011694] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:23.667 [2024-12-05 12:51:06.011702] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.667 [2024-12-05 12:51:06.013543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.667 [2024-12-05 12:51:06.013577] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:23.667 pt1 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.667 malloc2 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.667 [2024-12-05 12:51:06.047974] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:23.667 [2024-12-05 12:51:06.048022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.667 [2024-12-05 
12:51:06.048042] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:23.667 [2024-12-05 12:51:06.048050] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.667 [2024-12-05 12:51:06.049882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.667 [2024-12-05 12:51:06.049913] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:23.667 pt2 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.667 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.667 [2024-12-05 12:51:06.060041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:23.667 [2024-12-05 12:51:06.061624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:23.667 [2024-12-05 12:51:06.061768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:23.667 [2024-12-05 12:51:06.061777] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:23.667 [2024-12-05 12:51:06.062015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:23.667 [2024-12-05 12:51:06.062131] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:23.667 [2024-12-05 12:51:06.062145] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev 
is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:23.668 [2024-12-05 12:51:06.062278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.668 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.668 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:19:23.668 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.668 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.668 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:23.668 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.668 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:23.668 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.668 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.668 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.668 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.668 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.668 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.668 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.668 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.668 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.668 12:51:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.668 "name": "raid_bdev1", 00:19:23.668 "uuid": "eedb3d01-4bef-4980-a24e-76d728cdfad2", 00:19:23.668 "strip_size_kb": 64, 00:19:23.668 "state": "online", 00:19:23.668 "raid_level": "raid0", 00:19:23.668 "superblock": true, 00:19:23.668 "num_base_bdevs": 2, 00:19:23.668 "num_base_bdevs_discovered": 2, 00:19:23.668 "num_base_bdevs_operational": 2, 00:19:23.668 "base_bdevs_list": [ 00:19:23.668 { 00:19:23.668 "name": "pt1", 00:19:23.668 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:23.668 "is_configured": true, 00:19:23.668 "data_offset": 2048, 00:19:23.668 "data_size": 63488 00:19:23.668 }, 00:19:23.668 { 00:19:23.668 "name": "pt2", 00:19:23.668 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:23.668 "is_configured": true, 00:19:23.668 "data_offset": 2048, 00:19:23.668 "data_size": 63488 00:19:23.668 } 00:19:23.668 ] 00:19:23.668 }' 00:19:23.668 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.668 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:23.928 12:51:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.928 [2024-12-05 12:51:06.388295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:23.928 "name": "raid_bdev1", 00:19:23.928 "aliases": [ 00:19:23.928 "eedb3d01-4bef-4980-a24e-76d728cdfad2" 00:19:23.928 ], 00:19:23.928 "product_name": "Raid Volume", 00:19:23.928 "block_size": 512, 00:19:23.928 "num_blocks": 126976, 00:19:23.928 "uuid": "eedb3d01-4bef-4980-a24e-76d728cdfad2", 00:19:23.928 "assigned_rate_limits": { 00:19:23.928 "rw_ios_per_sec": 0, 00:19:23.928 "rw_mbytes_per_sec": 0, 00:19:23.928 "r_mbytes_per_sec": 0, 00:19:23.928 "w_mbytes_per_sec": 0 00:19:23.928 }, 00:19:23.928 "claimed": false, 00:19:23.928 "zoned": false, 00:19:23.928 "supported_io_types": { 00:19:23.928 "read": true, 00:19:23.928 "write": true, 00:19:23.928 "unmap": true, 00:19:23.928 "flush": true, 00:19:23.928 "reset": true, 00:19:23.928 "nvme_admin": false, 00:19:23.928 "nvme_io": false, 00:19:23.928 "nvme_io_md": false, 00:19:23.928 "write_zeroes": true, 00:19:23.928 "zcopy": false, 00:19:23.928 "get_zone_info": false, 00:19:23.928 "zone_management": false, 00:19:23.928 "zone_append": false, 00:19:23.928 "compare": false, 00:19:23.928 "compare_and_write": false, 00:19:23.928 "abort": false, 00:19:23.928 "seek_hole": false, 00:19:23.928 "seek_data": false, 00:19:23.928 "copy": false, 00:19:23.928 "nvme_iov_md": false 00:19:23.928 }, 00:19:23.928 "memory_domains": [ 00:19:23.928 { 00:19:23.928 "dma_device_id": "system", 00:19:23.928 "dma_device_type": 1 00:19:23.928 }, 00:19:23.928 { 00:19:23.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.928 "dma_device_type": 
2 00:19:23.928 }, 00:19:23.928 { 00:19:23.928 "dma_device_id": "system", 00:19:23.928 "dma_device_type": 1 00:19:23.928 }, 00:19:23.928 { 00:19:23.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.928 "dma_device_type": 2 00:19:23.928 } 00:19:23.928 ], 00:19:23.928 "driver_specific": { 00:19:23.928 "raid": { 00:19:23.928 "uuid": "eedb3d01-4bef-4980-a24e-76d728cdfad2", 00:19:23.928 "strip_size_kb": 64, 00:19:23.928 "state": "online", 00:19:23.928 "raid_level": "raid0", 00:19:23.928 "superblock": true, 00:19:23.928 "num_base_bdevs": 2, 00:19:23.928 "num_base_bdevs_discovered": 2, 00:19:23.928 "num_base_bdevs_operational": 2, 00:19:23.928 "base_bdevs_list": [ 00:19:23.928 { 00:19:23.928 "name": "pt1", 00:19:23.928 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:23.928 "is_configured": true, 00:19:23.928 "data_offset": 2048, 00:19:23.928 "data_size": 63488 00:19:23.928 }, 00:19:23.928 { 00:19:23.928 "name": "pt2", 00:19:23.928 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:23.928 "is_configured": true, 00:19:23.928 "data_offset": 2048, 00:19:23.928 "data_size": 63488 00:19:23.928 } 00:19:23.928 ] 00:19:23.928 } 00:19:23.928 } 00:19:23.928 }' 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:23.928 pt2' 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.928 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.189 [2024-12-05 
12:51:06.556327] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=eedb3d01-4bef-4980-a24e-76d728cdfad2 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z eedb3d01-4bef-4980-a24e-76d728cdfad2 ']' 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.189 [2024-12-05 12:51:06.584066] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:24.189 [2024-12-05 12:51:06.584091] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:24.189 [2024-12-05 12:51:06.584158] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:24.189 [2024-12-05 12:51:06.584198] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:24.189 [2024-12-05 12:51:06.584208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:24.189 
12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' 
false == true ']' 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.189 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.189 [2024-12-05 12:51:06.680114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:24.189 [2024-12-05 12:51:06.681705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:24.189 [2024-12-05 12:51:06.681759] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:24.189 [2024-12-05 12:51:06.681801] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:24.189 [2024-12-05 12:51:06.681813] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:19:24.189 [2024-12-05 12:51:06.681823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:24.189 request: 00:19:24.189 { 00:19:24.189 "name": "raid_bdev1", 00:19:24.189 "raid_level": "raid0", 00:19:24.189 "base_bdevs": [ 00:19:24.189 "malloc1", 00:19:24.189 "malloc2" 00:19:24.189 ], 00:19:24.189 "strip_size_kb": 64, 00:19:24.189 "superblock": false, 00:19:24.190 "method": "bdev_raid_create", 00:19:24.190 "req_id": 1 00:19:24.190 } 00:19:24.190 Got JSON-RPC error response 00:19:24.190 response: 00:19:24.190 { 00:19:24.190 "code": -17, 00:19:24.190 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:24.190 } 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.190 [2024-12-05 12:51:06.720097] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:24.190 [2024-12-05 12:51:06.720150] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.190 [2024-12-05 12:51:06.720164] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:24.190 [2024-12-05 12:51:06.720173] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.190 [2024-12-05 12:51:06.722004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.190 [2024-12-05 12:51:06.722038] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:24.190 [2024-12-05 12:51:06.722104] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:24.190 [2024-12-05 12:51:06.722144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:24.190 pt1 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:24.190 
12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.190 "name": "raid_bdev1", 00:19:24.190 "uuid": "eedb3d01-4bef-4980-a24e-76d728cdfad2", 00:19:24.190 "strip_size_kb": 64, 00:19:24.190 "state": "configuring", 00:19:24.190 "raid_level": "raid0", 00:19:24.190 "superblock": true, 00:19:24.190 "num_base_bdevs": 2, 00:19:24.190 "num_base_bdevs_discovered": 1, 00:19:24.190 "num_base_bdevs_operational": 2, 00:19:24.190 "base_bdevs_list": [ 00:19:24.190 { 00:19:24.190 "name": "pt1", 00:19:24.190 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:24.190 "is_configured": true, 00:19:24.190 "data_offset": 2048, 00:19:24.190 "data_size": 63488 00:19:24.190 }, 00:19:24.190 { 00:19:24.190 "name": null, 00:19:24.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:24.190 "is_configured": false, 00:19:24.190 "data_offset": 2048, 00:19:24.190 "data_size": 63488 
00:19:24.190 } 00:19:24.190 ] 00:19:24.190 }' 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.190 12:51:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.765 [2024-12-05 12:51:07.060184] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:24.765 [2024-12-05 12:51:07.060235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.765 [2024-12-05 12:51:07.060251] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:24.765 [2024-12-05 12:51:07.060260] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.765 [2024-12-05 12:51:07.060622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.765 [2024-12-05 12:51:07.060636] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:24.765 [2024-12-05 12:51:07.060696] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:24.765 [2024-12-05 12:51:07.060715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:24.765 [2024-12-05 12:51:07.060802] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:19:24.765 [2024-12-05 12:51:07.060812] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:24.765 [2024-12-05 12:51:07.061008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:24.765 [2024-12-05 12:51:07.061121] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:24.765 [2024-12-05 12:51:07.061133] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:24.765 [2024-12-05 12:51:07.061239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.765 pt2 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.765 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.765 "name": "raid_bdev1", 00:19:24.765 "uuid": "eedb3d01-4bef-4980-a24e-76d728cdfad2", 00:19:24.765 "strip_size_kb": 64, 00:19:24.765 "state": "online", 00:19:24.765 "raid_level": "raid0", 00:19:24.765 "superblock": true, 00:19:24.765 "num_base_bdevs": 2, 00:19:24.765 "num_base_bdevs_discovered": 2, 00:19:24.766 "num_base_bdevs_operational": 2, 00:19:24.766 "base_bdevs_list": [ 00:19:24.766 { 00:19:24.766 "name": "pt1", 00:19:24.766 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:24.766 "is_configured": true, 00:19:24.766 "data_offset": 2048, 00:19:24.766 "data_size": 63488 00:19:24.766 }, 00:19:24.766 { 00:19:24.766 "name": "pt2", 00:19:24.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:24.766 "is_configured": true, 00:19:24.766 "data_offset": 2048, 00:19:24.766 "data_size": 63488 00:19:24.766 } 00:19:24.766 ] 00:19:24.766 }' 00:19:24.766 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.766 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:25.030 [2024-12-05 12:51:07.376456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:25.030 "name": "raid_bdev1", 00:19:25.030 "aliases": [ 00:19:25.030 "eedb3d01-4bef-4980-a24e-76d728cdfad2" 00:19:25.030 ], 00:19:25.030 "product_name": "Raid Volume", 00:19:25.030 "block_size": 512, 00:19:25.030 "num_blocks": 126976, 00:19:25.030 "uuid": "eedb3d01-4bef-4980-a24e-76d728cdfad2", 00:19:25.030 "assigned_rate_limits": { 00:19:25.030 "rw_ios_per_sec": 0, 00:19:25.030 "rw_mbytes_per_sec": 0, 00:19:25.030 "r_mbytes_per_sec": 0, 00:19:25.030 "w_mbytes_per_sec": 0 00:19:25.030 }, 00:19:25.030 "claimed": false, 00:19:25.030 "zoned": false, 00:19:25.030 "supported_io_types": { 00:19:25.030 "read": true, 00:19:25.030 "write": true, 00:19:25.030 "unmap": true, 00:19:25.030 "flush": true, 00:19:25.030 "reset": true, 00:19:25.030 "nvme_admin": false, 
00:19:25.030 "nvme_io": false, 00:19:25.030 "nvme_io_md": false, 00:19:25.030 "write_zeroes": true, 00:19:25.030 "zcopy": false, 00:19:25.030 "get_zone_info": false, 00:19:25.030 "zone_management": false, 00:19:25.030 "zone_append": false, 00:19:25.030 "compare": false, 00:19:25.030 "compare_and_write": false, 00:19:25.030 "abort": false, 00:19:25.030 "seek_hole": false, 00:19:25.030 "seek_data": false, 00:19:25.030 "copy": false, 00:19:25.030 "nvme_iov_md": false 00:19:25.030 }, 00:19:25.030 "memory_domains": [ 00:19:25.030 { 00:19:25.030 "dma_device_id": "system", 00:19:25.030 "dma_device_type": 1 00:19:25.030 }, 00:19:25.030 { 00:19:25.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.030 "dma_device_type": 2 00:19:25.030 }, 00:19:25.030 { 00:19:25.030 "dma_device_id": "system", 00:19:25.030 "dma_device_type": 1 00:19:25.030 }, 00:19:25.030 { 00:19:25.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.030 "dma_device_type": 2 00:19:25.030 } 00:19:25.030 ], 00:19:25.030 "driver_specific": { 00:19:25.030 "raid": { 00:19:25.030 "uuid": "eedb3d01-4bef-4980-a24e-76d728cdfad2", 00:19:25.030 "strip_size_kb": 64, 00:19:25.030 "state": "online", 00:19:25.030 "raid_level": "raid0", 00:19:25.030 "superblock": true, 00:19:25.030 "num_base_bdevs": 2, 00:19:25.030 "num_base_bdevs_discovered": 2, 00:19:25.030 "num_base_bdevs_operational": 2, 00:19:25.030 "base_bdevs_list": [ 00:19:25.030 { 00:19:25.030 "name": "pt1", 00:19:25.030 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:25.030 "is_configured": true, 00:19:25.030 "data_offset": 2048, 00:19:25.030 "data_size": 63488 00:19:25.030 }, 00:19:25.030 { 00:19:25.030 "name": "pt2", 00:19:25.030 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:25.030 "is_configured": true, 00:19:25.030 "data_offset": 2048, 00:19:25.030 "data_size": 63488 00:19:25.030 } 00:19:25.030 ] 00:19:25.030 } 00:19:25.030 } 00:19:25.030 }' 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:25.030 pt2' 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.030 12:51:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.030 [2024-12-05 12:51:07.528503] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' eedb3d01-4bef-4980-a24e-76d728cdfad2 '!=' eedb3d01-4bef-4980-a24e-76d728cdfad2 ']' 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 59813 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59813 ']' 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59813 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59813 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:25.030 killing process with pid 59813 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59813' 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 59813 00:19:25.030 12:51:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 59813 00:19:25.030 [2024-12-05 12:51:07.574713] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:25.030 [2024-12-05 12:51:07.574785] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.030 [2024-12-05 12:51:07.574827] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.030 [2024-12-05 12:51:07.574836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:25.292 [2024-12-05 12:51:07.679404] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:25.949 12:51:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:25.949 00:19:25.949 real 0m3.178s 00:19:25.949 user 0m4.581s 00:19:25.949 sys 0m0.471s 00:19:25.949 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:25.949 12:51:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.949 ************************************ 00:19:25.949 END TEST raid_superblock_test 00:19:25.949 ************************************ 00:19:25.949 12:51:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:19:25.949 12:51:08 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:25.949 12:51:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:25.949 12:51:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:25.949 ************************************ 00:19:25.949 START TEST raid_read_error_test 00:19:25.949 ************************************ 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@795 -- # local strip_size 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MO9uWrzdPF 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=60013 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 60013 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 60013 ']' 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:25.949 12:51:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.950 12:51:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.950 [2024-12-05 12:51:08.350683] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:19:25.950 [2024-12-05 12:51:08.350785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60013 ] 00:19:25.950 [2024-12-05 12:51:08.501569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.210 [2024-12-05 12:51:08.588334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.210 [2024-12-05 12:51:08.702290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:26.210 [2024-12-05 12:51:08.702331] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:26.782 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.782 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:19:26.782 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:26.782 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:26.782 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.783 BaseBdev1_malloc 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.783 true 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.783 [2024-12-05 12:51:09.242761] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:26.783 [2024-12-05 12:51:09.242812] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.783 [2024-12-05 12:51:09.242830] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:26.783 [2024-12-05 12:51:09.242840] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.783 [2024-12-05 12:51:09.244644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.783 [2024-12-05 12:51:09.244682] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:26.783 BaseBdev1 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:19:26.783 BaseBdev2_malloc 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.783 true 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.783 [2024-12-05 12:51:09.282826] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:26.783 [2024-12-05 12:51:09.282873] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.783 [2024-12-05 12:51:09.282887] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:26.783 [2024-12-05 12:51:09.282896] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.783 [2024-12-05 12:51:09.284700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.783 [2024-12-05 12:51:09.284736] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:26.783 BaseBdev2 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:19:26.783 12:51:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.783 [2024-12-05 12:51:09.290878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:26.783 [2024-12-05 12:51:09.292414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:26.783 [2024-12-05 12:51:09.292590] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:26.783 [2024-12-05 12:51:09.292610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:26.783 [2024-12-05 12:51:09.292821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:26.783 [2024-12-05 12:51:09.292955] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:26.783 [2024-12-05 12:51:09.292969] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:26.783 [2024-12-05 12:51:09.293091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.783 "name": "raid_bdev1", 00:19:26.783 "uuid": "cb437a37-9a63-4e56-8a32-ab363f94a078", 00:19:26.783 "strip_size_kb": 64, 00:19:26.783 "state": "online", 00:19:26.783 "raid_level": "raid0", 00:19:26.783 "superblock": true, 00:19:26.783 "num_base_bdevs": 2, 00:19:26.783 "num_base_bdevs_discovered": 2, 00:19:26.783 "num_base_bdevs_operational": 2, 00:19:26.783 "base_bdevs_list": [ 00:19:26.783 { 00:19:26.783 "name": "BaseBdev1", 00:19:26.783 "uuid": "90622ade-3223-5dd1-af47-53d28db53f8e", 00:19:26.783 "is_configured": true, 00:19:26.783 "data_offset": 2048, 00:19:26.783 "data_size": 63488 00:19:26.783 }, 00:19:26.783 { 00:19:26.783 "name": "BaseBdev2", 00:19:26.783 "uuid": "8841be5a-af41-5177-8eb2-47b4da90287e", 00:19:26.783 "is_configured": true, 00:19:26.783 "data_offset": 2048, 00:19:26.783 "data_size": 63488 00:19:26.783 } 00:19:26.783 ] 00:19:26.783 }' 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.783 12:51:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.043 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:27.043 12:51:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:27.303 [2024-12-05 12:51:09.707741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.304 "name": "raid_bdev1", 00:19:28.304 "uuid": "cb437a37-9a63-4e56-8a32-ab363f94a078", 00:19:28.304 "strip_size_kb": 64, 00:19:28.304 "state": "online", 00:19:28.304 "raid_level": "raid0", 00:19:28.304 "superblock": true, 00:19:28.304 "num_base_bdevs": 2, 00:19:28.304 "num_base_bdevs_discovered": 2, 00:19:28.304 "num_base_bdevs_operational": 2, 00:19:28.304 "base_bdevs_list": [ 00:19:28.304 { 00:19:28.304 "name": "BaseBdev1", 00:19:28.304 "uuid": "90622ade-3223-5dd1-af47-53d28db53f8e", 00:19:28.304 "is_configured": true, 00:19:28.304 "data_offset": 2048, 00:19:28.304 "data_size": 63488 00:19:28.304 }, 00:19:28.304 { 00:19:28.304 "name": "BaseBdev2", 00:19:28.304 "uuid": "8841be5a-af41-5177-8eb2-47b4da90287e", 00:19:28.304 "is_configured": true, 00:19:28.304 "data_offset": 2048, 00:19:28.304 "data_size": 63488 00:19:28.304 } 00:19:28.304 ] 00:19:28.304 }' 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.304 12:51:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.566 12:51:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:28.566 12:51:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.566 12:51:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.566 [2024-12-05 12:51:10.997069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:28.566 [2024-12-05 12:51:10.997102] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:28.566 [2024-12-05 12:51:10.999582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:28.566 [2024-12-05 12:51:10.999622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:28.566 [2024-12-05 12:51:10.999648] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:28.566 [2024-12-05 12:51:10.999658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:28.566 12:51:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.566 { 00:19:28.566 "results": [ 00:19:28.566 { 00:19:28.566 "job": "raid_bdev1", 00:19:28.566 "core_mask": "0x1", 00:19:28.566 "workload": "randrw", 00:19:28.566 "percentage": 50, 00:19:28.566 "status": "finished", 00:19:28.566 "queue_depth": 1, 00:19:28.566 "io_size": 131072, 00:19:28.566 "runtime": 1.287829, 00:19:28.566 "iops": 17211.91245110958, 00:19:28.566 "mibps": 2151.4890563886975, 00:19:28.566 "io_failed": 1, 00:19:28.566 "io_timeout": 0, 00:19:28.566 "avg_latency_us": 79.46161077971065, 00:19:28.566 "min_latency_us": 26.78153846153846, 00:19:28.566 "max_latency_us": 1323.323076923077 00:19:28.566 } 00:19:28.566 ], 
00:19:28.566 "core_count": 1 00:19:28.566 } 00:19:28.566 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 60013 00:19:28.566 12:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 60013 ']' 00:19:28.566 12:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 60013 00:19:28.566 12:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:19:28.566 12:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.566 12:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60013 00:19:28.566 12:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.566 12:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.566 killing process with pid 60013 00:19:28.566 12:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60013' 00:19:28.566 12:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 60013 00:19:28.567 [2024-12-05 12:51:11.024128] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:28.567 12:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 60013 00:19:28.567 [2024-12-05 12:51:11.090218] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:29.139 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:29.139 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MO9uWrzdPF 00:19:29.139 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:29.139 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.78 00:19:29.139 12:51:11 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:19:29.139 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:29.139 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:29.139 12:51:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.78 != \0\.\0\0 ]] 00:19:29.139 00:19:29.139 real 0m3.429s 00:19:29.139 user 0m4.210s 00:19:29.139 sys 0m0.365s 00:19:29.139 12:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:29.139 ************************************ 00:19:29.139 END TEST raid_read_error_test 00:19:29.139 ************************************ 00:19:29.139 12:51:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.401 12:51:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:19:29.401 12:51:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:29.401 12:51:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:29.401 12:51:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:29.401 ************************************ 00:19:29.401 START TEST raid_write_error_test 00:19:29.401 ************************************ 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:29.401 12:51:11 
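The `fail_per_s=0.78` extraction above (bdev_raid.sh@845) pipes the bdevperf log through `grep -v Job | grep raid_bdev1 | awk '{print $6}'`. A minimal sketch of that pipeline, using a fabricated sample log line (the real `/raidtest/tmp.*` column layout is an assumption here):

```shell
# Fabricated two-line bdevperf-style log for illustration only.
# The pipeline mirrors the trace: drop the "Job" header line, keep the
# raid_bdev1 row, print field 6 (the failures-per-second column).
log='Job: raid_bdev1 ended in about 1.29 seconds
raid_bdev1 17211.91 2151.49 1 0 0.78 79.46'

fail_per_s=$(printf '%s\n' "$log" | grep -v Job | grep raid_bdev1 | awk '{print $6}')
echo "$fail_per_s"

# Same check as bdev_raid.sh@849: a nonzero rate proves the injected
# read error actually surfaced through the raid bdev.
if [ "$fail_per_s" != "0.00" ]; then
  echo "errors were injected"
fi
```

The `!= 0.00` comparison is deliberately loose: the test only asserts that *some* I/O failed, not a specific rate, since throughput varies per run.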
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nWD0YMgZ4M 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=60142 00:19:29.401 12:51:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 60142 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 60142 ']' 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.401 12:51:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.401 [2024-12-05 12:51:11.854421] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:19:29.401 [2024-12-05 12:51:11.854556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60142 ] 00:19:29.663 [2024-12-05 12:51:12.010569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.663 [2024-12-05 12:51:12.115868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.924 [2024-12-05 12:51:12.255739] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:29.925 [2024-12-05 12:51:12.255786] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:30.185 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.186 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:19:30.186 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:30.186 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:30.186 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.186 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.186 BaseBdev1_malloc 00:19:30.186 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.186 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:30.186 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.186 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.186 true 00:19:30.186 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:30.186 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:30.186 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.186 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.186 [2024-12-05 12:51:12.744055] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:30.186 [2024-12-05 12:51:12.744112] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.186 [2024-12-05 12:51:12.744132] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:30.186 [2024-12-05 12:51:12.744143] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.186 [2024-12-05 12:51:12.746315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.186 [2024-12-05 12:51:12.746356] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:30.186 BaseBdev1 00:19:30.186 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.186 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:30.186 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:30.186 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.186 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.446 BaseBdev2_malloc 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:30.446 12:51:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.446 true 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.446 [2024-12-05 12:51:12.788291] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:30.446 [2024-12-05 12:51:12.788346] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.446 [2024-12-05 12:51:12.788364] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:30.446 [2024-12-05 12:51:12.788374] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.446 [2024-12-05 12:51:12.790551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.446 [2024-12-05 12:51:12.790587] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:30.446 BaseBdev2 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.446 [2024-12-05 12:51:12.800362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:19:30.446 [2024-12-05 12:51:12.802270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:30.446 [2024-12-05 12:51:12.802476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:30.446 [2024-12-05 12:51:12.802518] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:30.446 [2024-12-05 12:51:12.802794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:30.446 [2024-12-05 12:51:12.802977] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:30.446 [2024-12-05 12:51:12.802998] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:30.446 [2024-12-05 12:51:12.803154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.446 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.446 "name": "raid_bdev1", 00:19:30.446 "uuid": "29ea8b18-e44f-477a-9616-e14b4245450a", 00:19:30.446 "strip_size_kb": 64, 00:19:30.446 "state": "online", 00:19:30.446 "raid_level": "raid0", 00:19:30.446 "superblock": true, 00:19:30.446 "num_base_bdevs": 2, 00:19:30.446 "num_base_bdevs_discovered": 2, 00:19:30.446 "num_base_bdevs_operational": 2, 00:19:30.446 "base_bdevs_list": [ 00:19:30.446 { 00:19:30.446 "name": "BaseBdev1", 00:19:30.446 "uuid": "342923af-fc54-5925-9009-edfd8b3def24", 00:19:30.446 "is_configured": true, 00:19:30.446 "data_offset": 2048, 00:19:30.446 "data_size": 63488 00:19:30.446 }, 00:19:30.446 { 00:19:30.447 "name": "BaseBdev2", 00:19:30.447 "uuid": "85aec4eb-9bfb-508c-ac09-bd9d43dec548", 00:19:30.447 "is_configured": true, 00:19:30.447 "data_offset": 2048, 00:19:30.447 "data_size": 63488 00:19:30.447 } 00:19:30.447 ] 00:19:30.447 }' 00:19:30.447 12:51:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.447 12:51:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.706 12:51:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:30.706 12:51:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:30.706 [2024-12-05 12:51:13.217366] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:31.643 12:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:31.643 12:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.643 12:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.643 12:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.643 12:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:31.643 12:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:19:31.643 12:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:19:31.643 12:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:19:31.643 12:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:31.643 12:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:31.643 12:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:19:31.643 12:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.643 12:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:31.643 12:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.643 12:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.643 12:51:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.643 12:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.643 12:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.644 12:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.644 12:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.644 12:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.644 12:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.644 12:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.644 "name": "raid_bdev1", 00:19:31.644 "uuid": "29ea8b18-e44f-477a-9616-e14b4245450a", 00:19:31.644 "strip_size_kb": 64, 00:19:31.644 "state": "online", 00:19:31.644 "raid_level": "raid0", 00:19:31.644 "superblock": true, 00:19:31.644 "num_base_bdevs": 2, 00:19:31.644 "num_base_bdevs_discovered": 2, 00:19:31.644 "num_base_bdevs_operational": 2, 00:19:31.644 "base_bdevs_list": [ 00:19:31.644 { 00:19:31.644 "name": "BaseBdev1", 00:19:31.644 "uuid": "342923af-fc54-5925-9009-edfd8b3def24", 00:19:31.644 "is_configured": true, 00:19:31.644 "data_offset": 2048, 00:19:31.644 "data_size": 63488 00:19:31.644 }, 00:19:31.644 { 00:19:31.644 "name": "BaseBdev2", 00:19:31.644 "uuid": "85aec4eb-9bfb-508c-ac09-bd9d43dec548", 00:19:31.644 "is_configured": true, 00:19:31.644 "data_offset": 2048, 00:19:31.644 "data_size": 63488 00:19:31.644 } 00:19:31.644 ] 00:19:31.644 }' 00:19:31.644 12:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.644 12:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.905 12:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:19:31.905 12:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.905 12:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.905 [2024-12-05 12:51:14.439545] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:31.905 [2024-12-05 12:51:14.439583] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:31.905 [2024-12-05 12:51:14.442694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:31.905 [2024-12-05 12:51:14.442749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.905 [2024-12-05 12:51:14.442785] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:31.905 [2024-12-05 12:51:14.442797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:31.905 { 00:19:31.905 "results": [ 00:19:31.905 { 00:19:31.905 "job": "raid_bdev1", 00:19:31.905 "core_mask": "0x1", 00:19:31.905 "workload": "randrw", 00:19:31.905 "percentage": 50, 00:19:31.905 "status": "finished", 00:19:31.905 "queue_depth": 1, 00:19:31.905 "io_size": 131072, 00:19:31.905 "runtime": 1.22024, 00:19:31.905 "iops": 14026.748836294499, 00:19:31.905 "mibps": 1753.3436045368123, 00:19:31.905 "io_failed": 1, 00:19:31.905 "io_timeout": 0, 00:19:31.905 "avg_latency_us": 97.36436273430373, 00:19:31.905 "min_latency_us": 34.067692307692305, 00:19:31.905 "max_latency_us": 1688.8123076923077 00:19:31.905 } 00:19:31.905 ], 00:19:31.905 "core_count": 1 00:19:31.905 } 00:19:31.905 12:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.905 12:51:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 60142 00:19:31.905 12:51:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 60142 ']' 00:19:31.905 12:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 60142 00:19:31.905 12:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:19:31.905 12:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.905 12:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60142 00:19:31.905 killing process with pid 60142 00:19:31.905 12:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:31.905 12:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:31.905 12:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60142' 00:19:31.905 12:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 60142 00:19:31.905 12:51:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 60142 00:19:31.905 [2024-12-05 12:51:14.468411] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:32.165 [2024-12-05 12:51:14.555676] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:33.101 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nWD0YMgZ4M 00:19:33.101 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:33.101 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:33.101 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.82 00:19:33.101 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:19:33.101 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:33.101 12:51:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:19:33.101 12:51:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.82 != \0\.\0\0 ]] 00:19:33.101 00:19:33.101 real 0m3.559s 00:19:33.101 user 0m4.248s 00:19:33.101 sys 0m0.379s 00:19:33.101 ************************************ 00:19:33.101 END TEST raid_write_error_test 00:19:33.101 12:51:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:33.101 12:51:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.101 ************************************ 00:19:33.101 12:51:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:19:33.101 12:51:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:19:33.101 12:51:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:33.101 12:51:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:33.101 12:51:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:33.101 ************************************ 00:19:33.101 START TEST raid_state_function_test 00:19:33.101 ************************************ 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60275 00:19:33.101 Process raid pid: 60275 
00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60275' 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60275 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60275 ']' 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.101 12:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.101 [2024-12-05 12:51:15.482623] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:19:33.101 [2024-12-05 12:51:15.483239] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.101 [2024-12-05 12:51:15.646041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.360 [2024-12-05 12:51:15.750412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.360 [2024-12-05 12:51:15.890900] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:33.360 [2024-12-05 12:51:15.890943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.949 [2024-12-05 12:51:16.331628] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:33.949 [2024-12-05 12:51:16.331683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:33.949 [2024-12-05 12:51:16.331693] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:33.949 [2024-12-05 12:51:16.331703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.949 12:51:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.949 "name": "Existed_Raid", 00:19:33.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.949 "strip_size_kb": 64, 00:19:33.949 "state": "configuring", 00:19:33.949 
"raid_level": "concat", 00:19:33.949 "superblock": false, 00:19:33.949 "num_base_bdevs": 2, 00:19:33.949 "num_base_bdevs_discovered": 0, 00:19:33.949 "num_base_bdevs_operational": 2, 00:19:33.949 "base_bdevs_list": [ 00:19:33.949 { 00:19:33.949 "name": "BaseBdev1", 00:19:33.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.949 "is_configured": false, 00:19:33.949 "data_offset": 0, 00:19:33.949 "data_size": 0 00:19:33.949 }, 00:19:33.949 { 00:19:33.949 "name": "BaseBdev2", 00:19:33.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.949 "is_configured": false, 00:19:33.949 "data_offset": 0, 00:19:33.949 "data_size": 0 00:19:33.949 } 00:19:33.949 ] 00:19:33.949 }' 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.949 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.211 [2024-12-05 12:51:16.643664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:34.211 [2024-12-05 12:51:16.643699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:19:34.211 [2024-12-05 12:51:16.651661] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:34.211 [2024-12-05 12:51:16.651701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:34.211 [2024-12-05 12:51:16.651711] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:34.211 [2024-12-05 12:51:16.651722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.211 [2024-12-05 12:51:16.684550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:34.211 BaseBdev1 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.211 [ 00:19:34.211 { 00:19:34.211 "name": "BaseBdev1", 00:19:34.211 "aliases": [ 00:19:34.211 "1b20b6d9-791e-4182-8d82-728c03e28e3f" 00:19:34.211 ], 00:19:34.211 "product_name": "Malloc disk", 00:19:34.211 "block_size": 512, 00:19:34.211 "num_blocks": 65536, 00:19:34.211 "uuid": "1b20b6d9-791e-4182-8d82-728c03e28e3f", 00:19:34.211 "assigned_rate_limits": { 00:19:34.211 "rw_ios_per_sec": 0, 00:19:34.211 "rw_mbytes_per_sec": 0, 00:19:34.211 "r_mbytes_per_sec": 0, 00:19:34.211 "w_mbytes_per_sec": 0 00:19:34.211 }, 00:19:34.211 "claimed": true, 00:19:34.211 "claim_type": "exclusive_write", 00:19:34.211 "zoned": false, 00:19:34.211 "supported_io_types": { 00:19:34.211 "read": true, 00:19:34.211 "write": true, 00:19:34.211 "unmap": true, 00:19:34.211 "flush": true, 00:19:34.211 "reset": true, 00:19:34.211 "nvme_admin": false, 00:19:34.211 "nvme_io": false, 00:19:34.211 "nvme_io_md": false, 00:19:34.211 "write_zeroes": true, 00:19:34.211 "zcopy": true, 00:19:34.211 "get_zone_info": false, 00:19:34.211 "zone_management": false, 00:19:34.211 "zone_append": false, 00:19:34.211 "compare": false, 00:19:34.211 "compare_and_write": false, 00:19:34.211 "abort": true, 00:19:34.211 "seek_hole": false, 00:19:34.211 "seek_data": false, 00:19:34.211 "copy": true, 00:19:34.211 "nvme_iov_md": 
false 00:19:34.211 }, 00:19:34.211 "memory_domains": [ 00:19:34.211 { 00:19:34.211 "dma_device_id": "system", 00:19:34.211 "dma_device_type": 1 00:19:34.211 }, 00:19:34.211 { 00:19:34.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.211 "dma_device_type": 2 00:19:34.211 } 00:19:34.211 ], 00:19:34.211 "driver_specific": {} 00:19:34.211 } 00:19:34.211 ] 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.211 
12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.211 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.211 "name": "Existed_Raid", 00:19:34.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.211 "strip_size_kb": 64, 00:19:34.211 "state": "configuring", 00:19:34.211 "raid_level": "concat", 00:19:34.211 "superblock": false, 00:19:34.211 "num_base_bdevs": 2, 00:19:34.211 "num_base_bdevs_discovered": 1, 00:19:34.211 "num_base_bdevs_operational": 2, 00:19:34.211 "base_bdevs_list": [ 00:19:34.211 { 00:19:34.211 "name": "BaseBdev1", 00:19:34.211 "uuid": "1b20b6d9-791e-4182-8d82-728c03e28e3f", 00:19:34.211 "is_configured": true, 00:19:34.211 "data_offset": 0, 00:19:34.211 "data_size": 65536 00:19:34.211 }, 00:19:34.211 { 00:19:34.211 "name": "BaseBdev2", 00:19:34.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.211 "is_configured": false, 00:19:34.211 "data_offset": 0, 00:19:34.211 "data_size": 0 00:19:34.211 } 00:19:34.211 ] 00:19:34.212 }' 00:19:34.212 12:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.212 12:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.472 [2024-12-05 12:51:17.016655] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:34.472 [2024-12-05 12:51:17.016704] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.472 [2024-12-05 12:51:17.024698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:34.472 [2024-12-05 12:51:17.026545] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:34.472 [2024-12-05 12:51:17.026584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.472 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.730 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.730 "name": "Existed_Raid", 00:19:34.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.730 "strip_size_kb": 64, 00:19:34.730 "state": "configuring", 00:19:34.730 "raid_level": "concat", 00:19:34.730 "superblock": false, 00:19:34.730 "num_base_bdevs": 2, 00:19:34.730 "num_base_bdevs_discovered": 1, 00:19:34.730 "num_base_bdevs_operational": 2, 00:19:34.730 "base_bdevs_list": [ 00:19:34.730 { 00:19:34.730 "name": "BaseBdev1", 00:19:34.730 "uuid": "1b20b6d9-791e-4182-8d82-728c03e28e3f", 00:19:34.730 "is_configured": true, 00:19:34.730 "data_offset": 0, 00:19:34.730 "data_size": 65536 00:19:34.730 }, 00:19:34.730 { 00:19:34.730 "name": "BaseBdev2", 00:19:34.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.730 "is_configured": false, 00:19:34.730 "data_offset": 0, 00:19:34.730 "data_size": 0 00:19:34.730 } 
00:19:34.730 ] 00:19:34.730 }' 00:19:34.730 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.730 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.991 [2024-12-05 12:51:17.367842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:34.991 [2024-12-05 12:51:17.367891] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:34.991 [2024-12-05 12:51:17.367899] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:34.991 [2024-12-05 12:51:17.368160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:34.991 [2024-12-05 12:51:17.368308] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:34.991 [2024-12-05 12:51:17.368328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:34.991 [2024-12-05 12:51:17.368579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.991 BaseBdev2 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:34.991 12:51:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.991 [ 00:19:34.991 { 00:19:34.991 "name": "BaseBdev2", 00:19:34.991 "aliases": [ 00:19:34.991 "6883585c-a4d0-4cd7-854d-33f6f321f010" 00:19:34.991 ], 00:19:34.991 "product_name": "Malloc disk", 00:19:34.991 "block_size": 512, 00:19:34.991 "num_blocks": 65536, 00:19:34.991 "uuid": "6883585c-a4d0-4cd7-854d-33f6f321f010", 00:19:34.991 "assigned_rate_limits": { 00:19:34.991 "rw_ios_per_sec": 0, 00:19:34.991 "rw_mbytes_per_sec": 0, 00:19:34.991 "r_mbytes_per_sec": 0, 00:19:34.991 "w_mbytes_per_sec": 0 00:19:34.991 }, 00:19:34.991 "claimed": true, 00:19:34.991 "claim_type": "exclusive_write", 00:19:34.991 "zoned": false, 00:19:34.991 "supported_io_types": { 00:19:34.991 "read": true, 00:19:34.991 "write": true, 00:19:34.991 "unmap": true, 00:19:34.991 "flush": true, 00:19:34.991 "reset": true, 00:19:34.991 "nvme_admin": false, 00:19:34.991 "nvme_io": false, 00:19:34.991 "nvme_io_md": 
false, 00:19:34.991 "write_zeroes": true, 00:19:34.991 "zcopy": true, 00:19:34.991 "get_zone_info": false, 00:19:34.991 "zone_management": false, 00:19:34.991 "zone_append": false, 00:19:34.991 "compare": false, 00:19:34.991 "compare_and_write": false, 00:19:34.991 "abort": true, 00:19:34.991 "seek_hole": false, 00:19:34.991 "seek_data": false, 00:19:34.991 "copy": true, 00:19:34.991 "nvme_iov_md": false 00:19:34.991 }, 00:19:34.991 "memory_domains": [ 00:19:34.991 { 00:19:34.991 "dma_device_id": "system", 00:19:34.991 "dma_device_type": 1 00:19:34.991 }, 00:19:34.991 { 00:19:34.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.991 "dma_device_type": 2 00:19:34.991 } 00:19:34.991 ], 00:19:34.991 "driver_specific": {} 00:19:34.991 } 00:19:34.991 ] 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.991 "name": "Existed_Raid", 00:19:34.991 "uuid": "ba575515-b04d-43ff-a4d5-886eb56fe7b9", 00:19:34.991 "strip_size_kb": 64, 00:19:34.991 "state": "online", 00:19:34.991 "raid_level": "concat", 00:19:34.991 "superblock": false, 00:19:34.991 "num_base_bdevs": 2, 00:19:34.991 "num_base_bdevs_discovered": 2, 00:19:34.991 "num_base_bdevs_operational": 2, 00:19:34.991 "base_bdevs_list": [ 00:19:34.991 { 00:19:34.991 "name": "BaseBdev1", 00:19:34.991 "uuid": "1b20b6d9-791e-4182-8d82-728c03e28e3f", 00:19:34.991 "is_configured": true, 00:19:34.991 "data_offset": 0, 00:19:34.991 "data_size": 65536 00:19:34.991 }, 00:19:34.991 { 00:19:34.991 "name": "BaseBdev2", 00:19:34.991 "uuid": "6883585c-a4d0-4cd7-854d-33f6f321f010", 00:19:34.991 "is_configured": true, 00:19:34.991 "data_offset": 0, 00:19:34.991 "data_size": 65536 00:19:34.991 } 00:19:34.991 ] 00:19:34.991 }' 00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:19:34.991 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.251 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:35.251 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:35.251 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:35.251 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:35.251 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:35.251 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:35.251 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:35.251 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:35.251 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.251 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.251 [2024-12-05 12:51:17.716259] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:35.251 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.251 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:35.251 "name": "Existed_Raid", 00:19:35.251 "aliases": [ 00:19:35.251 "ba575515-b04d-43ff-a4d5-886eb56fe7b9" 00:19:35.251 ], 00:19:35.251 "product_name": "Raid Volume", 00:19:35.251 "block_size": 512, 00:19:35.251 "num_blocks": 131072, 00:19:35.251 "uuid": "ba575515-b04d-43ff-a4d5-886eb56fe7b9", 00:19:35.251 "assigned_rate_limits": { 00:19:35.251 "rw_ios_per_sec": 0, 00:19:35.251 "rw_mbytes_per_sec": 0, 00:19:35.251 "r_mbytes_per_sec": 
0, 00:19:35.251 "w_mbytes_per_sec": 0 00:19:35.251 }, 00:19:35.251 "claimed": false, 00:19:35.251 "zoned": false, 00:19:35.251 "supported_io_types": { 00:19:35.251 "read": true, 00:19:35.251 "write": true, 00:19:35.251 "unmap": true, 00:19:35.251 "flush": true, 00:19:35.251 "reset": true, 00:19:35.251 "nvme_admin": false, 00:19:35.251 "nvme_io": false, 00:19:35.251 "nvme_io_md": false, 00:19:35.251 "write_zeroes": true, 00:19:35.251 "zcopy": false, 00:19:35.251 "get_zone_info": false, 00:19:35.251 "zone_management": false, 00:19:35.251 "zone_append": false, 00:19:35.251 "compare": false, 00:19:35.251 "compare_and_write": false, 00:19:35.251 "abort": false, 00:19:35.251 "seek_hole": false, 00:19:35.251 "seek_data": false, 00:19:35.251 "copy": false, 00:19:35.251 "nvme_iov_md": false 00:19:35.251 }, 00:19:35.251 "memory_domains": [ 00:19:35.251 { 00:19:35.251 "dma_device_id": "system", 00:19:35.251 "dma_device_type": 1 00:19:35.251 }, 00:19:35.251 { 00:19:35.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.251 "dma_device_type": 2 00:19:35.251 }, 00:19:35.251 { 00:19:35.251 "dma_device_id": "system", 00:19:35.251 "dma_device_type": 1 00:19:35.251 }, 00:19:35.251 { 00:19:35.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.251 "dma_device_type": 2 00:19:35.251 } 00:19:35.251 ], 00:19:35.251 "driver_specific": { 00:19:35.251 "raid": { 00:19:35.251 "uuid": "ba575515-b04d-43ff-a4d5-886eb56fe7b9", 00:19:35.251 "strip_size_kb": 64, 00:19:35.251 "state": "online", 00:19:35.251 "raid_level": "concat", 00:19:35.251 "superblock": false, 00:19:35.251 "num_base_bdevs": 2, 00:19:35.251 "num_base_bdevs_discovered": 2, 00:19:35.251 "num_base_bdevs_operational": 2, 00:19:35.251 "base_bdevs_list": [ 00:19:35.251 { 00:19:35.251 "name": "BaseBdev1", 00:19:35.251 "uuid": "1b20b6d9-791e-4182-8d82-728c03e28e3f", 00:19:35.251 "is_configured": true, 00:19:35.251 "data_offset": 0, 00:19:35.251 "data_size": 65536 00:19:35.251 }, 00:19:35.251 { 00:19:35.251 "name": "BaseBdev2", 
00:19:35.251 "uuid": "6883585c-a4d0-4cd7-854d-33f6f321f010", 00:19:35.251 "is_configured": true, 00:19:35.251 "data_offset": 0, 00:19:35.251 "data_size": 65536 00:19:35.251 } 00:19:35.251 ] 00:19:35.251 } 00:19:35.251 } 00:19:35.251 }' 00:19:35.252 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:35.252 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:35.252 BaseBdev2' 00:19:35.252 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.252 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:35.252 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:35.252 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:35.252 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.252 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.252 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.252 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.512 [2024-12-05 12:51:17.880046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:35.512 [2024-12-05 12:51:17.880083] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:35.512 [2024-12-05 12:51:17.880131] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.512 "name": "Existed_Raid", 00:19:35.512 "uuid": "ba575515-b04d-43ff-a4d5-886eb56fe7b9", 00:19:35.512 "strip_size_kb": 64, 00:19:35.512 
"state": "offline", 00:19:35.512 "raid_level": "concat", 00:19:35.512 "superblock": false, 00:19:35.512 "num_base_bdevs": 2, 00:19:35.512 "num_base_bdevs_discovered": 1, 00:19:35.512 "num_base_bdevs_operational": 1, 00:19:35.512 "base_bdevs_list": [ 00:19:35.512 { 00:19:35.512 "name": null, 00:19:35.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.512 "is_configured": false, 00:19:35.512 "data_offset": 0, 00:19:35.512 "data_size": 65536 00:19:35.512 }, 00:19:35.512 { 00:19:35.512 "name": "BaseBdev2", 00:19:35.512 "uuid": "6883585c-a4d0-4cd7-854d-33f6f321f010", 00:19:35.512 "is_configured": true, 00:19:35.512 "data_offset": 0, 00:19:35.512 "data_size": 65536 00:19:35.512 } 00:19:35.512 ] 00:19:35.512 }' 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.512 12:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.773 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:35.773 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:35.773 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.773 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:35.773 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.773 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.773 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.773 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:35.773 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:35.773 12:51:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:35.773 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.773 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.773 [2024-12-05 12:51:18.291159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:35.773 [2024-12-05 12:51:18.291208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:35.773 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.773 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:35.773 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:35.773 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.773 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.773 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.773 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:36.034 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.034 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:36.034 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:36.034 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:36.034 12:51:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60275 00:19:36.034 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60275 ']' 00:19:36.034 12:51:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60275 00:19:36.034 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:19:36.034 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:36.034 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60275 00:19:36.034 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:36.034 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:36.034 killing process with pid 60275 00:19:36.034 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60275' 00:19:36.034 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60275 00:19:36.034 [2024-12-05 12:51:18.409881] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:36.034 12:51:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60275 00:19:36.034 [2024-12-05 12:51:18.420388] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:36.606 ************************************ 00:19:36.606 END TEST raid_state_function_test 00:19:36.606 ************************************ 00:19:36.606 00:19:36.606 real 0m3.726s 00:19:36.606 user 0m5.370s 00:19:36.606 sys 0m0.558s 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.606 12:51:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:19:36.606 12:51:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:19:36.606 12:51:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:36.606 12:51:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:36.606 ************************************ 00:19:36.606 START TEST raid_state_function_test_sb 00:19:36.606 ************************************ 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60511 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60511' 00:19:36.606 Process raid pid: 60511 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60511 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60511 ']' 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:36.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.606 12:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:36.866 [2024-12-05 12:51:19.248290] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:19:36.866 [2024-12-05 12:51:19.248415] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.866 [2024-12-05 12:51:19.409533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.139 [2024-12-05 12:51:19.512661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.139 [2024-12-05 12:51:19.651052] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:37.139 [2024-12-05 12:51:19.651082] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:37.706 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:37.706 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:37.706 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:37.706 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.706 12:51:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.706 [2024-12-05 12:51:20.107650] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:37.706 [2024-12-05 12:51:20.107708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:37.706 [2024-12-05 12:51:20.107718] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:37.706 [2024-12-05 12:51:20.107728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:37.706 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.706 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:37.706 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:37.706 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:37.706 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:37.706 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:37.706 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:37.706 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.706 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.706 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.706 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.706 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:19:37.706 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.706 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.706 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.706 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.706 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.706 "name": "Existed_Raid", 00:19:37.706 "uuid": "c40947f2-5e48-4f56-b004-71a425211e71", 00:19:37.706 "strip_size_kb": 64, 00:19:37.706 "state": "configuring", 00:19:37.706 "raid_level": "concat", 00:19:37.706 "superblock": true, 00:19:37.706 "num_base_bdevs": 2, 00:19:37.706 "num_base_bdevs_discovered": 0, 00:19:37.706 "num_base_bdevs_operational": 2, 00:19:37.706 "base_bdevs_list": [ 00:19:37.706 { 00:19:37.706 "name": "BaseBdev1", 00:19:37.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.706 "is_configured": false, 00:19:37.707 "data_offset": 0, 00:19:37.707 "data_size": 0 00:19:37.707 }, 00:19:37.707 { 00:19:37.707 "name": "BaseBdev2", 00:19:37.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.707 "is_configured": false, 00:19:37.707 "data_offset": 0, 00:19:37.707 "data_size": 0 00:19:37.707 } 00:19:37.707 ] 00:19:37.707 }' 00:19:37.707 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.707 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:37.967 [2024-12-05 12:51:20.411652] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:37.967 [2024-12-05 12:51:20.411686] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.967 [2024-12-05 12:51:20.419663] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:37.967 [2024-12-05 12:51:20.419703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:37.967 [2024-12-05 12:51:20.419711] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:37.967 [2024-12-05 12:51:20.419722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.967 [2024-12-05 12:51:20.452638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:37.967 BaseBdev1 00:19:37.967 12:51:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.967 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.967 [ 00:19:37.967 { 00:19:37.967 "name": "BaseBdev1", 00:19:37.967 "aliases": [ 00:19:37.967 "41d72035-143b-498f-93e2-f9b6b8daf34e" 00:19:37.967 ], 00:19:37.967 "product_name": "Malloc disk", 00:19:37.967 "block_size": 512, 00:19:37.967 "num_blocks": 65536, 00:19:37.967 "uuid": "41d72035-143b-498f-93e2-f9b6b8daf34e", 00:19:37.967 "assigned_rate_limits": { 00:19:37.967 "rw_ios_per_sec": 0, 
00:19:37.967 "rw_mbytes_per_sec": 0, 00:19:37.967 "r_mbytes_per_sec": 0, 00:19:37.967 "w_mbytes_per_sec": 0 00:19:37.967 }, 00:19:37.967 "claimed": true, 00:19:37.967 "claim_type": "exclusive_write", 00:19:37.967 "zoned": false, 00:19:37.967 "supported_io_types": { 00:19:37.967 "read": true, 00:19:37.967 "write": true, 00:19:37.967 "unmap": true, 00:19:37.967 "flush": true, 00:19:37.967 "reset": true, 00:19:37.967 "nvme_admin": false, 00:19:37.968 "nvme_io": false, 00:19:37.968 "nvme_io_md": false, 00:19:37.968 "write_zeroes": true, 00:19:37.968 "zcopy": true, 00:19:37.968 "get_zone_info": false, 00:19:37.968 "zone_management": false, 00:19:37.968 "zone_append": false, 00:19:37.968 "compare": false, 00:19:37.968 "compare_and_write": false, 00:19:37.968 "abort": true, 00:19:37.968 "seek_hole": false, 00:19:37.968 "seek_data": false, 00:19:37.968 "copy": true, 00:19:37.968 "nvme_iov_md": false 00:19:37.968 }, 00:19:37.968 "memory_domains": [ 00:19:37.968 { 00:19:37.968 "dma_device_id": "system", 00:19:37.968 "dma_device_type": 1 00:19:37.968 }, 00:19:37.968 { 00:19:37.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.968 "dma_device_type": 2 00:19:37.968 } 00:19:37.968 ], 00:19:37.968 "driver_specific": {} 00:19:37.968 } 00:19:37.968 ] 00:19:37.968 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.968 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:37.968 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:37.968 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:37.968 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:37.968 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:19:37.968 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:37.968 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:37.968 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.968 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.968 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.968 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.968 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.968 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.968 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.968 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.968 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.968 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.968 "name": "Existed_Raid", 00:19:37.968 "uuid": "2c746429-5a57-4935-884e-43f1b778df01", 00:19:37.968 "strip_size_kb": 64, 00:19:37.968 "state": "configuring", 00:19:37.968 "raid_level": "concat", 00:19:37.968 "superblock": true, 00:19:37.968 "num_base_bdevs": 2, 00:19:37.968 "num_base_bdevs_discovered": 1, 00:19:37.968 "num_base_bdevs_operational": 2, 00:19:37.968 "base_bdevs_list": [ 00:19:37.968 { 00:19:37.968 "name": "BaseBdev1", 00:19:37.968 "uuid": "41d72035-143b-498f-93e2-f9b6b8daf34e", 00:19:37.968 "is_configured": true, 00:19:37.968 "data_offset": 2048, 00:19:37.968 "data_size": 63488 00:19:37.968 }, 
00:19:37.968 { 00:19:37.968 "name": "BaseBdev2", 00:19:37.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.968 "is_configured": false, 00:19:37.968 "data_offset": 0, 00:19:37.968 "data_size": 0 00:19:37.968 } 00:19:37.968 ] 00:19:37.968 }' 00:19:37.968 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.968 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.228 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:38.228 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.228 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.228 [2024-12-05 12:51:20.800772] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:38.228 [2024-12-05 12:51:20.800822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:38.228 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.228 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:38.228 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.228 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.228 [2024-12-05 12:51:20.808809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:38.228 [2024-12-05 12:51:20.810645] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:38.228 [2024-12-05 12:51:20.810685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:38.489 
12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.489 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:38.489 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:38.489 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:19:38.489 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:38.489 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:38.489 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:38.489 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:38.489 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:38.489 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.489 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.489 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.489 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.489 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.489 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.489 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.489 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.489 
12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.489 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.489 "name": "Existed_Raid", 00:19:38.489 "uuid": "1bae32d8-2730-440f-bfb7-1a5f92e35405", 00:19:38.489 "strip_size_kb": 64, 00:19:38.489 "state": "configuring", 00:19:38.489 "raid_level": "concat", 00:19:38.489 "superblock": true, 00:19:38.489 "num_base_bdevs": 2, 00:19:38.489 "num_base_bdevs_discovered": 1, 00:19:38.489 "num_base_bdevs_operational": 2, 00:19:38.490 "base_bdevs_list": [ 00:19:38.490 { 00:19:38.490 "name": "BaseBdev1", 00:19:38.490 "uuid": "41d72035-143b-498f-93e2-f9b6b8daf34e", 00:19:38.490 "is_configured": true, 00:19:38.490 "data_offset": 2048, 00:19:38.490 "data_size": 63488 00:19:38.490 }, 00:19:38.490 { 00:19:38.490 "name": "BaseBdev2", 00:19:38.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.490 "is_configured": false, 00:19:38.490 "data_offset": 0, 00:19:38.490 "data_size": 0 00:19:38.490 } 00:19:38.490 ] 00:19:38.490 }' 00:19:38.490 12:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.490 12:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.760 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:38.760 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.760 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.760 [2024-12-05 12:51:21.143594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:38.760 [2024-12-05 12:51:21.143794] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:38.760 [2024-12-05 12:51:21.143811] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 126976, blocklen 512 00:19:38.760 [2024-12-05 12:51:21.144059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:38.760 BaseBdev2 00:19:38.760 [2024-12-05 12:51:21.144199] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:38.760 [2024-12-05 12:51:21.144217] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:38.760 [2024-12-05 12:51:21.144348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.760 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.760 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:38.760 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:38.760 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:38.760 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:38.760 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:38.760 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:38.760 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:38.760 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.760 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.760 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.760 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:38.760 12:51:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.760 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.760 [ 00:19:38.760 { 00:19:38.760 "name": "BaseBdev2", 00:19:38.760 "aliases": [ 00:19:38.760 "b120493d-5e27-44e9-a236-769795494fb5" 00:19:38.760 ], 00:19:38.760 "product_name": "Malloc disk", 00:19:38.760 "block_size": 512, 00:19:38.760 "num_blocks": 65536, 00:19:38.760 "uuid": "b120493d-5e27-44e9-a236-769795494fb5", 00:19:38.760 "assigned_rate_limits": { 00:19:38.760 "rw_ios_per_sec": 0, 00:19:38.761 "rw_mbytes_per_sec": 0, 00:19:38.761 "r_mbytes_per_sec": 0, 00:19:38.761 "w_mbytes_per_sec": 0 00:19:38.761 }, 00:19:38.761 "claimed": true, 00:19:38.761 "claim_type": "exclusive_write", 00:19:38.761 "zoned": false, 00:19:38.761 "supported_io_types": { 00:19:38.761 "read": true, 00:19:38.761 "write": true, 00:19:38.761 "unmap": true, 00:19:38.761 "flush": true, 00:19:38.761 "reset": true, 00:19:38.761 "nvme_admin": false, 00:19:38.761 "nvme_io": false, 00:19:38.761 "nvme_io_md": false, 00:19:38.761 "write_zeroes": true, 00:19:38.761 "zcopy": true, 00:19:38.761 "get_zone_info": false, 00:19:38.761 "zone_management": false, 00:19:38.761 "zone_append": false, 00:19:38.761 "compare": false, 00:19:38.761 "compare_and_write": false, 00:19:38.761 "abort": true, 00:19:38.761 "seek_hole": false, 00:19:38.761 "seek_data": false, 00:19:38.761 "copy": true, 00:19:38.761 "nvme_iov_md": false 00:19:38.761 }, 00:19:38.761 "memory_domains": [ 00:19:38.761 { 00:19:38.761 "dma_device_id": "system", 00:19:38.761 "dma_device_type": 1 00:19:38.761 }, 00:19:38.761 { 00:19:38.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.761 "dma_device_type": 2 00:19:38.761 } 00:19:38.761 ], 00:19:38.761 "driver_specific": {} 00:19:38.761 } 00:19:38.761 ] 00:19:38.761 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.761 12:51:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:38.761 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:38.761 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:38.761 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:19:38.761 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:38.761 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.761 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:38.761 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:38.761 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:38.761 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.761 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.761 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.761 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.761 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.761 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.761 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.761 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.761 12:51:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.761 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.761 "name": "Existed_Raid", 00:19:38.761 "uuid": "1bae32d8-2730-440f-bfb7-1a5f92e35405", 00:19:38.761 "strip_size_kb": 64, 00:19:38.761 "state": "online", 00:19:38.761 "raid_level": "concat", 00:19:38.761 "superblock": true, 00:19:38.761 "num_base_bdevs": 2, 00:19:38.761 "num_base_bdevs_discovered": 2, 00:19:38.761 "num_base_bdevs_operational": 2, 00:19:38.761 "base_bdevs_list": [ 00:19:38.761 { 00:19:38.761 "name": "BaseBdev1", 00:19:38.761 "uuid": "41d72035-143b-498f-93e2-f9b6b8daf34e", 00:19:38.761 "is_configured": true, 00:19:38.761 "data_offset": 2048, 00:19:38.761 "data_size": 63488 00:19:38.761 }, 00:19:38.761 { 00:19:38.761 "name": "BaseBdev2", 00:19:38.761 "uuid": "b120493d-5e27-44e9-a236-769795494fb5", 00:19:38.761 "is_configured": true, 00:19:38.761 "data_offset": 2048, 00:19:38.761 "data_size": 63488 00:19:38.761 } 00:19:38.761 ] 00:19:38.761 }' 00:19:38.761 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.761 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 
00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.056 [2024-12-05 12:51:21.480014] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:39.056 "name": "Existed_Raid", 00:19:39.056 "aliases": [ 00:19:39.056 "1bae32d8-2730-440f-bfb7-1a5f92e35405" 00:19:39.056 ], 00:19:39.056 "product_name": "Raid Volume", 00:19:39.056 "block_size": 512, 00:19:39.056 "num_blocks": 126976, 00:19:39.056 "uuid": "1bae32d8-2730-440f-bfb7-1a5f92e35405", 00:19:39.056 "assigned_rate_limits": { 00:19:39.056 "rw_ios_per_sec": 0, 00:19:39.056 "rw_mbytes_per_sec": 0, 00:19:39.056 "r_mbytes_per_sec": 0, 00:19:39.056 "w_mbytes_per_sec": 0 00:19:39.056 }, 00:19:39.056 "claimed": false, 00:19:39.056 "zoned": false, 00:19:39.056 "supported_io_types": { 00:19:39.056 "read": true, 00:19:39.056 "write": true, 00:19:39.056 "unmap": true, 00:19:39.056 "flush": true, 00:19:39.056 "reset": true, 00:19:39.056 "nvme_admin": false, 00:19:39.056 "nvme_io": false, 00:19:39.056 "nvme_io_md": false, 00:19:39.056 "write_zeroes": true, 00:19:39.056 "zcopy": false, 00:19:39.056 "get_zone_info": false, 00:19:39.056 "zone_management": false, 00:19:39.056 "zone_append": false, 00:19:39.056 "compare": false, 00:19:39.056 "compare_and_write": false, 00:19:39.056 "abort": false, 00:19:39.056 "seek_hole": false, 00:19:39.056 "seek_data": false, 00:19:39.056 "copy": false, 
00:19:39.056 "nvme_iov_md": false 00:19:39.056 }, 00:19:39.056 "memory_domains": [ 00:19:39.056 { 00:19:39.056 "dma_device_id": "system", 00:19:39.056 "dma_device_type": 1 00:19:39.056 }, 00:19:39.056 { 00:19:39.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.056 "dma_device_type": 2 00:19:39.056 }, 00:19:39.056 { 00:19:39.056 "dma_device_id": "system", 00:19:39.056 "dma_device_type": 1 00:19:39.056 }, 00:19:39.056 { 00:19:39.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.056 "dma_device_type": 2 00:19:39.056 } 00:19:39.056 ], 00:19:39.056 "driver_specific": { 00:19:39.056 "raid": { 00:19:39.056 "uuid": "1bae32d8-2730-440f-bfb7-1a5f92e35405", 00:19:39.056 "strip_size_kb": 64, 00:19:39.056 "state": "online", 00:19:39.056 "raid_level": "concat", 00:19:39.056 "superblock": true, 00:19:39.056 "num_base_bdevs": 2, 00:19:39.056 "num_base_bdevs_discovered": 2, 00:19:39.056 "num_base_bdevs_operational": 2, 00:19:39.056 "base_bdevs_list": [ 00:19:39.056 { 00:19:39.056 "name": "BaseBdev1", 00:19:39.056 "uuid": "41d72035-143b-498f-93e2-f9b6b8daf34e", 00:19:39.056 "is_configured": true, 00:19:39.056 "data_offset": 2048, 00:19:39.056 "data_size": 63488 00:19:39.056 }, 00:19:39.056 { 00:19:39.056 "name": "BaseBdev2", 00:19:39.056 "uuid": "b120493d-5e27-44e9-a236-769795494fb5", 00:19:39.056 "is_configured": true, 00:19:39.056 "data_offset": 2048, 00:19:39.056 "data_size": 63488 00:19:39.056 } 00:19:39.056 ] 00:19:39.056 } 00:19:39.056 } 00:19:39.056 }' 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:39.056 BaseBdev2' 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:39.056 12:51:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.056 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.056 [2024-12-05 12:51:21.627797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:39.056 [2024-12-05 12:51:21.627831] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:39.056 [2024-12-05 12:51:21.627881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:39.318 
12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.318 "name": "Existed_Raid", 00:19:39.318 "uuid": "1bae32d8-2730-440f-bfb7-1a5f92e35405", 00:19:39.318 "strip_size_kb": 64, 00:19:39.318 "state": "offline", 00:19:39.318 "raid_level": "concat", 00:19:39.318 "superblock": true, 00:19:39.318 "num_base_bdevs": 2, 00:19:39.318 "num_base_bdevs_discovered": 1, 00:19:39.318 "num_base_bdevs_operational": 1, 00:19:39.318 "base_bdevs_list": [ 00:19:39.318 { 00:19:39.318 "name": null, 00:19:39.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.318 "is_configured": false, 00:19:39.318 "data_offset": 0, 00:19:39.318 "data_size": 63488 00:19:39.318 }, 00:19:39.318 { 00:19:39.318 "name": "BaseBdev2", 00:19:39.318 "uuid": "b120493d-5e27-44e9-a236-769795494fb5", 00:19:39.318 
"is_configured": true, 00:19:39.318 "data_offset": 2048, 00:19:39.318 "data_size": 63488 00:19:39.318 } 00:19:39.318 ] 00:19:39.318 }' 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.318 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.581 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:39.581 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:39.581 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.581 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.581 12:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.581 12:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.581 [2024-12-05 12:51:22.034994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:39.581 [2024-12-05 12:51:22.035044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:39.581 12:51:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60511 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60511 ']' 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60511 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60511 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60511' 00:19:39.581 killing process with pid 60511 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60511 00:19:39.581 [2024-12-05 12:51:22.145719] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:39.581 12:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60511 00:19:39.581 [2024-12-05 12:51:22.154194] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:40.525 12:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:40.525 ************************************ 00:19:40.525 END TEST raid_state_function_test_sb 00:19:40.525 ************************************ 00:19:40.525 00:19:40.525 real 0m3.560s 00:19:40.525 user 0m5.178s 00:19:40.525 sys 0m0.558s 00:19:40.526 12:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.526 12:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.526 12:51:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:19:40.526 12:51:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:40.526 12:51:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.526 12:51:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:40.526 ************************************ 00:19:40.526 START TEST raid_superblock_test 00:19:40.526 ************************************ 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:40.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=60755 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 60755 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60755 ']' 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.526 12:51:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.526 [2024-12-05 12:51:22.840311] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:19:40.526 [2024-12-05 12:51:22.840430] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60755 ] 00:19:40.526 [2024-12-05 12:51:22.994262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.526 [2024-12-05 12:51:23.086231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.787 [2024-12-05 12:51:23.198726] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:40.787 [2024-12-05 12:51:23.198784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:41.360 12:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:41.360 12:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:19:41.360 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:41.360 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:41.360 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:41.360 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:41.360 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:41.360 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:41.360 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:41.360 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:41.360 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:19:41.360 
12:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.360 12:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.360 malloc1 00:19:41.360 12:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.360 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:41.360 12:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.360 12:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.360 [2024-12-05 12:51:23.668846] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:41.360 [2024-12-05 12:51:23.668896] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.360 [2024-12-05 12:51:23.668916] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:41.360 [2024-12-05 12:51:23.668924] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.360 [2024-12-05 12:51:23.670696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.360 [2024-12-05 12:51:23.670728] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:41.360 pt1 00:19:41.360 12:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.361 malloc2 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.361 [2024-12-05 12:51:23.700826] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:41.361 [2024-12-05 12:51:23.700874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.361 [2024-12-05 12:51:23.700893] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:41.361 [2024-12-05 12:51:23.700901] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.361 [2024-12-05 12:51:23.702673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.361 [2024-12-05 12:51:23.702706] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:41.361 
pt2 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.361 [2024-12-05 12:51:23.708877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:41.361 [2024-12-05 12:51:23.710398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:41.361 [2024-12-05 12:51:23.710543] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:41.361 [2024-12-05 12:51:23.710558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:41.361 [2024-12-05 12:51:23.710787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:41.361 [2024-12-05 12:51:23.710905] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:41.361 [2024-12-05 12:51:23.710918] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:41.361 [2024-12-05 12:51:23.711036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.361 "name": "raid_bdev1", 00:19:41.361 "uuid": "50bba9e4-6f5e-414d-9de2-e29532bf795d", 00:19:41.361 "strip_size_kb": 64, 00:19:41.361 "state": "online", 00:19:41.361 "raid_level": "concat", 00:19:41.361 "superblock": true, 00:19:41.361 "num_base_bdevs": 2, 00:19:41.361 "num_base_bdevs_discovered": 2, 00:19:41.361 "num_base_bdevs_operational": 2, 00:19:41.361 "base_bdevs_list": [ 00:19:41.361 { 00:19:41.361 "name": "pt1", 
00:19:41.361 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:41.361 "is_configured": true, 00:19:41.361 "data_offset": 2048, 00:19:41.361 "data_size": 63488 00:19:41.361 }, 00:19:41.361 { 00:19:41.361 "name": "pt2", 00:19:41.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:41.361 "is_configured": true, 00:19:41.361 "data_offset": 2048, 00:19:41.361 "data_size": 63488 00:19:41.361 } 00:19:41.361 ] 00:19:41.361 }' 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.361 12:51:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.623 [2024-12-05 12:51:24.013150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:41.623 "name": "raid_bdev1", 00:19:41.623 "aliases": [ 00:19:41.623 "50bba9e4-6f5e-414d-9de2-e29532bf795d" 00:19:41.623 ], 00:19:41.623 "product_name": "Raid Volume", 00:19:41.623 "block_size": 512, 00:19:41.623 "num_blocks": 126976, 00:19:41.623 "uuid": "50bba9e4-6f5e-414d-9de2-e29532bf795d", 00:19:41.623 "assigned_rate_limits": { 00:19:41.623 "rw_ios_per_sec": 0, 00:19:41.623 "rw_mbytes_per_sec": 0, 00:19:41.623 "r_mbytes_per_sec": 0, 00:19:41.623 "w_mbytes_per_sec": 0 00:19:41.623 }, 00:19:41.623 "claimed": false, 00:19:41.623 "zoned": false, 00:19:41.623 "supported_io_types": { 00:19:41.623 "read": true, 00:19:41.623 "write": true, 00:19:41.623 "unmap": true, 00:19:41.623 "flush": true, 00:19:41.623 "reset": true, 00:19:41.623 "nvme_admin": false, 00:19:41.623 "nvme_io": false, 00:19:41.623 "nvme_io_md": false, 00:19:41.623 "write_zeroes": true, 00:19:41.623 "zcopy": false, 00:19:41.623 "get_zone_info": false, 00:19:41.623 "zone_management": false, 00:19:41.623 "zone_append": false, 00:19:41.623 "compare": false, 00:19:41.623 "compare_and_write": false, 00:19:41.623 "abort": false, 00:19:41.623 "seek_hole": false, 00:19:41.623 "seek_data": false, 00:19:41.623 "copy": false, 00:19:41.623 "nvme_iov_md": false 00:19:41.623 }, 00:19:41.623 "memory_domains": [ 00:19:41.623 { 00:19:41.623 "dma_device_id": "system", 00:19:41.623 "dma_device_type": 1 00:19:41.623 }, 00:19:41.623 { 00:19:41.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.623 "dma_device_type": 2 00:19:41.623 }, 00:19:41.623 { 00:19:41.623 "dma_device_id": "system", 00:19:41.623 "dma_device_type": 1 00:19:41.623 }, 00:19:41.623 { 00:19:41.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.623 "dma_device_type": 2 00:19:41.623 } 00:19:41.623 ], 00:19:41.623 "driver_specific": { 00:19:41.623 "raid": { 00:19:41.623 "uuid": "50bba9e4-6f5e-414d-9de2-e29532bf795d", 00:19:41.623 "strip_size_kb": 64, 00:19:41.623 "state": "online", 00:19:41.623 
"raid_level": "concat", 00:19:41.623 "superblock": true, 00:19:41.623 "num_base_bdevs": 2, 00:19:41.623 "num_base_bdevs_discovered": 2, 00:19:41.623 "num_base_bdevs_operational": 2, 00:19:41.623 "base_bdevs_list": [ 00:19:41.623 { 00:19:41.623 "name": "pt1", 00:19:41.623 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:41.623 "is_configured": true, 00:19:41.623 "data_offset": 2048, 00:19:41.623 "data_size": 63488 00:19:41.623 }, 00:19:41.623 { 00:19:41.623 "name": "pt2", 00:19:41.623 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:41.623 "is_configured": true, 00:19:41.623 "data_offset": 2048, 00:19:41.623 "data_size": 63488 00:19:41.623 } 00:19:41.623 ] 00:19:41.623 } 00:19:41.623 } 00:19:41.623 }' 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:41.623 pt2' 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.623 12:51:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.623 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:41.624 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.624 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.624 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.624 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:41.624 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:41.624 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:41.624 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:41.624 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.624 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.624 [2024-12-05 12:51:24.177174] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:41.624 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.624 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=50bba9e4-6f5e-414d-9de2-e29532bf795d 00:19:41.624 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
50bba9e4-6f5e-414d-9de2-e29532bf795d ']' 00:19:41.624 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:41.624 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.624 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.885 [2024-12-05 12:51:24.208918] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:41.885 [2024-12-05 12:51:24.208940] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:41.885 [2024-12-05 12:51:24.209006] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:41.885 [2024-12-05 12:51:24.209047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:41.885 [2024-12-05 12:51:24.209057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:41.885 12:51:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.885 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.885 [2024-12-05 12:51:24.304994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:41.885 [2024-12-05 12:51:24.306547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:41.885 [2024-12-05 12:51:24.306602] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:41.885 [2024-12-05 12:51:24.306644] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:41.885 [2024-12-05 12:51:24.306656] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:41.885 [2024-12-05 12:51:24.306665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:41.885 request: 00:19:41.885 { 00:19:41.885 "name": "raid_bdev1", 00:19:41.885 "raid_level": "concat", 00:19:41.885 "base_bdevs": [ 00:19:41.885 "malloc1", 00:19:41.885 "malloc2" 00:19:41.886 ], 00:19:41.886 "strip_size_kb": 64, 
00:19:41.886 "superblock": false, 00:19:41.886 "method": "bdev_raid_create", 00:19:41.886 "req_id": 1 00:19:41.886 } 00:19:41.886 Got JSON-RPC error response 00:19:41.886 response: 00:19:41.886 { 00:19:41.886 "code": -17, 00:19:41.886 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:41.886 } 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.886 [2024-12-05 12:51:24.348955] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:19:41.886 [2024-12-05 12:51:24.349000] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.886 [2024-12-05 12:51:24.349013] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:41.886 [2024-12-05 12:51:24.349022] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.886 [2024-12-05 12:51:24.350836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.886 [2024-12-05 12:51:24.350869] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:41.886 [2024-12-05 12:51:24.350935] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:41.886 [2024-12-05 12:51:24.350975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:41.886 pt1 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.886 "name": "raid_bdev1", 00:19:41.886 "uuid": "50bba9e4-6f5e-414d-9de2-e29532bf795d", 00:19:41.886 "strip_size_kb": 64, 00:19:41.886 "state": "configuring", 00:19:41.886 "raid_level": "concat", 00:19:41.886 "superblock": true, 00:19:41.886 "num_base_bdevs": 2, 00:19:41.886 "num_base_bdevs_discovered": 1, 00:19:41.886 "num_base_bdevs_operational": 2, 00:19:41.886 "base_bdevs_list": [ 00:19:41.886 { 00:19:41.886 "name": "pt1", 00:19:41.886 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:41.886 "is_configured": true, 00:19:41.886 "data_offset": 2048, 00:19:41.886 "data_size": 63488 00:19:41.886 }, 00:19:41.886 { 00:19:41.886 "name": null, 00:19:41.886 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:41.886 "is_configured": false, 00:19:41.886 "data_offset": 2048, 00:19:41.886 "data_size": 63488 00:19:41.886 } 00:19:41.886 ] 00:19:41.886 }' 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.886 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.147 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:42.147 12:51:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:42.147 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.148 [2024-12-05 12:51:24.693050] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:42.148 [2024-12-05 12:51:24.693108] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.148 [2024-12-05 12:51:24.693124] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:42.148 [2024-12-05 12:51:24.693134] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.148 [2024-12-05 12:51:24.693505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.148 [2024-12-05 12:51:24.693546] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:42.148 [2024-12-05 12:51:24.693609] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:42.148 [2024-12-05 12:51:24.693629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:42.148 [2024-12-05 12:51:24.693714] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:42.148 [2024-12-05 12:51:24.693732] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:42.148 [2024-12-05 12:51:24.693929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:42.148 [2024-12-05 12:51:24.694039] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:19:42.148 [2024-12-05 12:51:24.694046] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:42.148 [2024-12-05 12:51:24.694147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:42.148 pt2 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.148 "name": "raid_bdev1", 00:19:42.148 "uuid": "50bba9e4-6f5e-414d-9de2-e29532bf795d", 00:19:42.148 "strip_size_kb": 64, 00:19:42.148 "state": "online", 00:19:42.148 "raid_level": "concat", 00:19:42.148 "superblock": true, 00:19:42.148 "num_base_bdevs": 2, 00:19:42.148 "num_base_bdevs_discovered": 2, 00:19:42.148 "num_base_bdevs_operational": 2, 00:19:42.148 "base_bdevs_list": [ 00:19:42.148 { 00:19:42.148 "name": "pt1", 00:19:42.148 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:42.148 "is_configured": true, 00:19:42.148 "data_offset": 2048, 00:19:42.148 "data_size": 63488 00:19:42.148 }, 00:19:42.148 { 00:19:42.148 "name": "pt2", 00:19:42.148 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:42.148 "is_configured": true, 00:19:42.148 "data_offset": 2048, 00:19:42.148 "data_size": 63488 00:19:42.148 } 00:19:42.148 ] 00:19:42.148 }' 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.148 12:51:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.723 12:51:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:42.723 12:51:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.723 [2024-12-05 12:51:25.009324] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:42.723 "name": "raid_bdev1", 00:19:42.723 "aliases": [ 00:19:42.723 "50bba9e4-6f5e-414d-9de2-e29532bf795d" 00:19:42.723 ], 00:19:42.723 "product_name": "Raid Volume", 00:19:42.723 "block_size": 512, 00:19:42.723 "num_blocks": 126976, 00:19:42.723 "uuid": "50bba9e4-6f5e-414d-9de2-e29532bf795d", 00:19:42.723 "assigned_rate_limits": { 00:19:42.723 "rw_ios_per_sec": 0, 00:19:42.723 "rw_mbytes_per_sec": 0, 00:19:42.723 "r_mbytes_per_sec": 0, 00:19:42.723 "w_mbytes_per_sec": 0 00:19:42.723 }, 00:19:42.723 "claimed": false, 00:19:42.723 "zoned": false, 00:19:42.723 "supported_io_types": { 00:19:42.723 "read": true, 00:19:42.723 "write": true, 00:19:42.723 "unmap": true, 00:19:42.723 "flush": true, 00:19:42.723 "reset": true, 00:19:42.723 "nvme_admin": false, 00:19:42.723 "nvme_io": false, 00:19:42.723 "nvme_io_md": false, 00:19:42.723 "write_zeroes": true, 00:19:42.723 "zcopy": false, 00:19:42.723 "get_zone_info": false, 00:19:42.723 "zone_management": false, 00:19:42.723 "zone_append": false, 00:19:42.723 "compare": false, 00:19:42.723 "compare_and_write": false, 00:19:42.723 "abort": false, 00:19:42.723 "seek_hole": false, 00:19:42.723 
"seek_data": false, 00:19:42.723 "copy": false, 00:19:42.723 "nvme_iov_md": false 00:19:42.723 }, 00:19:42.723 "memory_domains": [ 00:19:42.723 { 00:19:42.723 "dma_device_id": "system", 00:19:42.723 "dma_device_type": 1 00:19:42.723 }, 00:19:42.723 { 00:19:42.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.723 "dma_device_type": 2 00:19:42.723 }, 00:19:42.723 { 00:19:42.723 "dma_device_id": "system", 00:19:42.723 "dma_device_type": 1 00:19:42.723 }, 00:19:42.723 { 00:19:42.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.723 "dma_device_type": 2 00:19:42.723 } 00:19:42.723 ], 00:19:42.723 "driver_specific": { 00:19:42.723 "raid": { 00:19:42.723 "uuid": "50bba9e4-6f5e-414d-9de2-e29532bf795d", 00:19:42.723 "strip_size_kb": 64, 00:19:42.723 "state": "online", 00:19:42.723 "raid_level": "concat", 00:19:42.723 "superblock": true, 00:19:42.723 "num_base_bdevs": 2, 00:19:42.723 "num_base_bdevs_discovered": 2, 00:19:42.723 "num_base_bdevs_operational": 2, 00:19:42.723 "base_bdevs_list": [ 00:19:42.723 { 00:19:42.723 "name": "pt1", 00:19:42.723 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:42.723 "is_configured": true, 00:19:42.723 "data_offset": 2048, 00:19:42.723 "data_size": 63488 00:19:42.723 }, 00:19:42.723 { 00:19:42.723 "name": "pt2", 00:19:42.723 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:42.723 "is_configured": true, 00:19:42.723 "data_offset": 2048, 00:19:42.723 "data_size": 63488 00:19:42.723 } 00:19:42.723 ] 00:19:42.723 } 00:19:42.723 } 00:19:42.723 }' 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:42.723 pt2' 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:42.723 12:51:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.723 [2024-12-05 12:51:25.229358] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 50bba9e4-6f5e-414d-9de2-e29532bf795d '!=' 50bba9e4-6f5e-414d-9de2-e29532bf795d ']' 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 60755 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60755 ']' 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60755 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60755 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:42.723 killing process with pid 60755 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60755' 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 60755 00:19:42.723 [2024-12-05 12:51:25.282531] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:42.723 [2024-12-05 12:51:25.282607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:42.723 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 60755 00:19:42.723 [2024-12-05 12:51:25.282648] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:42.723 [2024-12-05 12:51:25.282659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:42.983 [2024-12-05 12:51:25.386025] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:43.606 12:51:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:43.606 00:19:43.606 real 0m3.189s 00:19:43.606 user 0m4.566s 00:19:43.606 sys 0m0.512s 00:19:43.606 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:43.606 12:51:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.606 ************************************ 00:19:43.606 END TEST raid_superblock_test 00:19:43.606 ************************************ 00:19:43.606 12:51:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:19:43.606 12:51:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:43.606 12:51:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:43.606 12:51:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:43.606 ************************************ 00:19:43.606 START TEST raid_read_error_test 00:19:43.606 ************************************ 00:19:43.606 12:51:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:43.606 12:51:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Br77PsVlL3 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=60951 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 60951 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 60951 ']' 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:43.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:43.606 12:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.606 [2024-12-05 12:51:26.101505] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:19:43.606 [2024-12-05 12:51:26.101623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60951 ] 00:19:43.866 [2024-12-05 12:51:26.252175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.866 [2024-12-05 12:51:26.350891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.136 [2024-12-05 12:51:26.486481] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:44.136 [2024-12-05 12:51:26.486522] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:44.514 12:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.514 12:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:19:44.514 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:44.514 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:44.514 12:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.514 12:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.514 BaseBdev1_malloc 00:19:44.514 12:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.514 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:44.514 12:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.514 12:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.514 true 00:19:44.514 12:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:44.514 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:44.514 12:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.515 12:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.515 [2024-12-05 12:51:26.965042] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:44.515 [2024-12-05 12:51:26.965094] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.515 [2024-12-05 12:51:26.965113] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:44.515 [2024-12-05 12:51:26.965124] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.515 [2024-12-05 12:51:26.967234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.515 [2024-12-05 12:51:26.967271] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:44.515 BaseBdev1 00:19:44.515 12:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.515 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:44.515 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:44.515 12:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.515 12:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.515 BaseBdev2_malloc 00:19:44.515 12:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.515 12:51:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:44.515 12:51:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.515 12:51:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.515 true 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.515 [2024-12-05 12:51:27.008528] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:44.515 [2024-12-05 12:51:27.008577] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.515 [2024-12-05 12:51:27.008593] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:44.515 [2024-12-05 12:51:27.008603] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.515 [2024-12-05 12:51:27.010678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.515 [2024-12-05 12:51:27.010712] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:44.515 BaseBdev2 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.515 [2024-12-05 12:51:27.016591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:19:44.515 [2024-12-05 12:51:27.018398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:44.515 [2024-12-05 12:51:27.018587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:44.515 [2024-12-05 12:51:27.018601] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:44.515 [2024-12-05 12:51:27.018840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:44.515 [2024-12-05 12:51:27.018990] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:44.515 [2024-12-05 12:51:27.019001] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:44.515 [2024-12-05 12:51:27.019132] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.515 "name": "raid_bdev1", 00:19:44.515 "uuid": "e7172493-29a7-43e2-8b1f-85e94294e981", 00:19:44.515 "strip_size_kb": 64, 00:19:44.515 "state": "online", 00:19:44.515 "raid_level": "concat", 00:19:44.515 "superblock": true, 00:19:44.515 "num_base_bdevs": 2, 00:19:44.515 "num_base_bdevs_discovered": 2, 00:19:44.515 "num_base_bdevs_operational": 2, 00:19:44.515 "base_bdevs_list": [ 00:19:44.515 { 00:19:44.515 "name": "BaseBdev1", 00:19:44.515 "uuid": "f50dbcfb-5dc5-505d-8ac0-53bca2b75689", 00:19:44.515 "is_configured": true, 00:19:44.515 "data_offset": 2048, 00:19:44.515 "data_size": 63488 00:19:44.515 }, 00:19:44.515 { 00:19:44.515 "name": "BaseBdev2", 00:19:44.515 "uuid": "e961f940-bb7e-5a54-85b9-0a0d8f4ef6af", 00:19:44.515 "is_configured": true, 00:19:44.515 "data_offset": 2048, 00:19:44.515 "data_size": 63488 00:19:44.515 } 00:19:44.515 ] 00:19:44.515 }' 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.515 12:51:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.087 12:51:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:45.087 12:51:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:45.087 [2024-12-05 12:51:27.465662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:46.026 12:51:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:46.026 12:51:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.026 12:51:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.026 12:51:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.026 12:51:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:46.026 12:51:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:19:46.026 12:51:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:19:46.026 12:51:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:46.026 12:51:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:46.026 12:51:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:46.026 12:51:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:46.026 12:51:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:46.026 12:51:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:46.026 12:51:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.026 12:51:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.026 12:51:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:19:46.026 12:51:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.026 12:51:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.027 12:51:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.027 12:51:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.027 12:51:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.027 12:51:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.027 12:51:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.027 "name": "raid_bdev1", 00:19:46.027 "uuid": "e7172493-29a7-43e2-8b1f-85e94294e981", 00:19:46.027 "strip_size_kb": 64, 00:19:46.027 "state": "online", 00:19:46.027 "raid_level": "concat", 00:19:46.027 "superblock": true, 00:19:46.027 "num_base_bdevs": 2, 00:19:46.027 "num_base_bdevs_discovered": 2, 00:19:46.027 "num_base_bdevs_operational": 2, 00:19:46.027 "base_bdevs_list": [ 00:19:46.027 { 00:19:46.027 "name": "BaseBdev1", 00:19:46.027 "uuid": "f50dbcfb-5dc5-505d-8ac0-53bca2b75689", 00:19:46.027 "is_configured": true, 00:19:46.027 "data_offset": 2048, 00:19:46.027 "data_size": 63488 00:19:46.027 }, 00:19:46.027 { 00:19:46.027 "name": "BaseBdev2", 00:19:46.027 "uuid": "e961f940-bb7e-5a54-85b9-0a0d8f4ef6af", 00:19:46.027 "is_configured": true, 00:19:46.027 "data_offset": 2048, 00:19:46.027 "data_size": 63488 00:19:46.027 } 00:19:46.027 ] 00:19:46.027 }' 00:19:46.027 12:51:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.027 12:51:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.286 12:51:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:46.286 12:51:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.286 12:51:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.286 [2024-12-05 12:51:28.739629] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:46.286 [2024-12-05 12:51:28.739780] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:46.286 [2024-12-05 12:51:28.742874] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:46.286 [2024-12-05 12:51:28.743008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.286 [2024-12-05 12:51:28.743047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:46.286 [2024-12-05 12:51:28.743058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:46.286 { 00:19:46.286 "results": [ 00:19:46.286 { 00:19:46.286 "job": "raid_bdev1", 00:19:46.286 "core_mask": "0x1", 00:19:46.286 "workload": "randrw", 00:19:46.286 "percentage": 50, 00:19:46.286 "status": "finished", 00:19:46.286 "queue_depth": 1, 00:19:46.286 "io_size": 131072, 00:19:46.286 "runtime": 1.272338, 00:19:46.286 "iops": 14475.713214570342, 00:19:46.286 "mibps": 1809.4641518212927, 00:19:46.286 "io_failed": 1, 00:19:46.286 "io_timeout": 0, 00:19:46.286 "avg_latency_us": 94.11167565264965, 00:19:46.286 "min_latency_us": 34.067692307692305, 00:19:46.286 "max_latency_us": 1688.8123076923077 00:19:46.286 } 00:19:46.286 ], 00:19:46.286 "core_count": 1 00:19:46.286 } 00:19:46.286 12:51:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.286 12:51:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 60951 00:19:46.286 12:51:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 60951 ']' 00:19:46.286 12:51:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 60951 00:19:46.286 12:51:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:19:46.286 12:51:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.286 12:51:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60951 00:19:46.286 killing process with pid 60951 00:19:46.286 12:51:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:46.286 12:51:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:46.286 12:51:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60951' 00:19:46.286 12:51:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 60951 00:19:46.286 12:51:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 60951 00:19:46.286 [2024-12-05 12:51:28.770123] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:46.286 [2024-12-05 12:51:28.853254] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:47.224 12:51:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:47.224 12:51:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:47.224 12:51:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Br77PsVlL3 00:19:47.224 12:51:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79 00:19:47.224 12:51:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:19:47.224 12:51:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:47.224 12:51:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:47.224 12:51:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]] 00:19:47.224 00:19:47.224 real 0m3.623s 00:19:47.224 user 0m4.375s 00:19:47.224 sys 0m0.390s 00:19:47.224 ************************************ 00:19:47.224 END TEST raid_read_error_test 00:19:47.224 ************************************ 00:19:47.224 12:51:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:47.224 12:51:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.224 12:51:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:19:47.224 12:51:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:47.224 12:51:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:47.224 12:51:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:47.224 ************************************ 00:19:47.224 START TEST raid_write_error_test 00:19:47.224 ************************************ 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:47.224 12:51:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CuWaeuSpmK 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61086 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61086 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61086 ']' 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:19:47.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.224 12:51:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.224 [2024-12-05 12:51:29.742917] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:19:47.224 [2024-12-05 12:51:29.743172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61086 ] 00:19:47.484 [2024-12-05 12:51:29.898235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.484 [2024-12-05 12:51:29.999160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.745 [2024-12-05 12:51:30.138164] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:47.745 [2024-12-05 12:51:30.138205] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
for bdev in "${base_bdevs[@]}" 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.313 BaseBdev1_malloc 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.313 true 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.313 [2024-12-05 12:51:30.695772] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:48.313 [2024-12-05 12:51:30.695954] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.313 [2024-12-05 12:51:30.695982] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:48.313 [2024-12-05 12:51:30.695994] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.313 [2024-12-05 12:51:30.698147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.313 [2024-12-05 12:51:30.698184] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:48.313 BaseBdev1 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.313 BaseBdev2_malloc 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.313 true 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.313 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.313 [2024-12-05 12:51:30.739575] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:48.313 [2024-12-05 12:51:30.739627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.313 [2024-12-05 12:51:30.739643] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:48.313 
[2024-12-05 12:51:30.739653] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.313 [2024-12-05 12:51:30.741751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.313 [2024-12-05 12:51:30.741785] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:48.313 BaseBdev2 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.314 [2024-12-05 12:51:30.747631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:48.314 [2024-12-05 12:51:30.749453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:48.314 [2024-12-05 12:51:30.749649] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:48.314 [2024-12-05 12:51:30.749664] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:48.314 [2024-12-05 12:51:30.749909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:48.314 [2024-12-05 12:51:30.750057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:48.314 [2024-12-05 12:51:30.750068] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:48.314 [2024-12-05 12:51:30.750209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.314 
12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.314 "name": "raid_bdev1", 00:19:48.314 "uuid": "2cb00a38-a015-4143-accc-1480a49d882b", 00:19:48.314 "strip_size_kb": 64, 00:19:48.314 "state": "online", 00:19:48.314 "raid_level": "concat", 00:19:48.314 "superblock": true, 
00:19:48.314 "num_base_bdevs": 2, 00:19:48.314 "num_base_bdevs_discovered": 2, 00:19:48.314 "num_base_bdevs_operational": 2, 00:19:48.314 "base_bdevs_list": [ 00:19:48.314 { 00:19:48.314 "name": "BaseBdev1", 00:19:48.314 "uuid": "cc15752f-8021-503f-9bdf-52091e1e50bc", 00:19:48.314 "is_configured": true, 00:19:48.314 "data_offset": 2048, 00:19:48.314 "data_size": 63488 00:19:48.314 }, 00:19:48.314 { 00:19:48.314 "name": "BaseBdev2", 00:19:48.314 "uuid": "ba56a6b8-1f51-524b-aaf7-600f82769959", 00:19:48.314 "is_configured": true, 00:19:48.314 "data_offset": 2048, 00:19:48.314 "data_size": 63488 00:19:48.314 } 00:19:48.314 ] 00:19:48.314 }' 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.314 12:51:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.575 12:51:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:19:48.575 12:51:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:48.836 [2024-12-05 12:51:31.160678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:49.774 12:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:49.774 12:51:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.774 12:51:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.774 12:51:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.774 12:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:19:49.774 12:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:19:49.774 12:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:19:49.774 12:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:19:49.774 12:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:49.774 12:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:49.774 12:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:19:49.774 12:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:49.774 12:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:49.774 12:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.774 12:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.774 12:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.774 12:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.774 12:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.775 12:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.775 12:51:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.775 12:51:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.775 12:51:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.775 12:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.775 "name": "raid_bdev1", 00:19:49.775 "uuid": "2cb00a38-a015-4143-accc-1480a49d882b", 00:19:49.775 "strip_size_kb": 64, 00:19:49.775 "state": "online", 00:19:49.775 "raid_level": "concat", 
00:19:49.775 "superblock": true, 00:19:49.775 "num_base_bdevs": 2, 00:19:49.775 "num_base_bdevs_discovered": 2, 00:19:49.775 "num_base_bdevs_operational": 2, 00:19:49.775 "base_bdevs_list": [ 00:19:49.775 { 00:19:49.775 "name": "BaseBdev1", 00:19:49.775 "uuid": "cc15752f-8021-503f-9bdf-52091e1e50bc", 00:19:49.775 "is_configured": true, 00:19:49.775 "data_offset": 2048, 00:19:49.775 "data_size": 63488 00:19:49.775 }, 00:19:49.775 { 00:19:49.775 "name": "BaseBdev2", 00:19:49.775 "uuid": "ba56a6b8-1f51-524b-aaf7-600f82769959", 00:19:49.775 "is_configured": true, 00:19:49.775 "data_offset": 2048, 00:19:49.775 "data_size": 63488 00:19:49.775 } 00:19:49.775 ] 00:19:49.775 }' 00:19:49.775 12:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.775 12:51:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.036 12:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:50.036 12:51:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.036 12:51:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.036 [2024-12-05 12:51:32.390604] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:50.036 [2024-12-05 12:51:32.390636] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:50.036 [2024-12-05 12:51:32.393724] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:50.036 [2024-12-05 12:51:32.393770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:50.036 [2024-12-05 12:51:32.393802] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:50.036 [2024-12-05 12:51:32.393815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:50.036 { 
00:19:50.036 "results": [ 00:19:50.036 { 00:19:50.036 "job": "raid_bdev1", 00:19:50.036 "core_mask": "0x1", 00:19:50.036 "workload": "randrw", 00:19:50.036 "percentage": 50, 00:19:50.036 "status": "finished", 00:19:50.036 "queue_depth": 1, 00:19:50.036 "io_size": 131072, 00:19:50.036 "runtime": 1.228187, 00:19:50.036 "iops": 14167.223720817758, 00:19:50.036 "mibps": 1770.9029651022197, 00:19:50.036 "io_failed": 1, 00:19:50.036 "io_timeout": 0, 00:19:50.036 "avg_latency_us": 96.18235273834837, 00:19:50.036 "min_latency_us": 33.870769230769234, 00:19:50.036 "max_latency_us": 1890.4615384615386 00:19:50.036 } 00:19:50.036 ], 00:19:50.036 "core_count": 1 00:19:50.036 } 00:19:50.036 12:51:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.036 12:51:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61086 00:19:50.036 12:51:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61086 ']' 00:19:50.036 12:51:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61086 00:19:50.036 12:51:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:19:50.036 12:51:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.036 12:51:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61086 00:19:50.037 killing process with pid 61086 00:19:50.037 12:51:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:50.037 12:51:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:50.037 12:51:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61086' 00:19:50.037 12:51:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61086 00:19:50.037 [2024-12-05 12:51:32.417682] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:50.037 12:51:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61086 00:19:50.037 [2024-12-05 12:51:32.502797] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:50.719 12:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CuWaeuSpmK 00:19:50.719 12:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:19:50.719 12:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:19:50.719 12:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:19:50.719 12:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:19:50.719 12:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:50.719 12:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:19:50.719 12:51:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:19:50.719 00:19:50.719 real 0m3.588s 00:19:50.719 user 0m4.338s 00:19:50.719 sys 0m0.374s 00:19:50.719 12:51:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:50.719 12:51:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.719 ************************************ 00:19:50.719 END TEST raid_write_error_test 00:19:50.719 ************************************ 00:19:50.719 12:51:33 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:19:50.719 12:51:33 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:19:50.719 12:51:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:50.719 12:51:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:50.719 12:51:33 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:50.719 ************************************ 00:19:50.719 START TEST raid_state_function_test 00:19:50.719 ************************************ 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:50.719 Process raid pid: 61218 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61218 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61218' 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61218 00:19:50.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61218 ']' 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.719 12:51:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:50.979 [2024-12-05 12:51:33.361684] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:19:50.979 [2024-12-05 12:51:33.361806] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.979 [2024-12-05 12:51:33.522172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.239 [2024-12-05 12:51:33.626598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.239 [2024-12-05 12:51:33.765649] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:51.239 [2024-12-05 12:51:33.765688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:51.808 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.808 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:19:51.808 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:51.808 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.808 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.808 [2024-12-05 12:51:34.223604] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:51.808 [2024-12-05 12:51:34.223654] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:51.808 [2024-12-05 12:51:34.223665] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:51.808 [2024-12-05 12:51:34.223676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:51.808 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.808 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:51.808 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:51.808 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:51.808 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.808 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.808 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:51.808 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.808 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.809 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.809 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.809 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.809 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.809 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.809 12:51:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.809 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.809 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.809 "name": "Existed_Raid", 00:19:51.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.809 "strip_size_kb": 0, 00:19:51.809 "state": "configuring", 00:19:51.809 "raid_level": "raid1", 00:19:51.809 "superblock": false, 00:19:51.809 "num_base_bdevs": 2, 00:19:51.809 "num_base_bdevs_discovered": 0, 00:19:51.809 "num_base_bdevs_operational": 2, 00:19:51.809 "base_bdevs_list": [ 00:19:51.809 { 00:19:51.809 "name": "BaseBdev1", 00:19:51.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.809 "is_configured": false, 00:19:51.809 "data_offset": 0, 00:19:51.809 "data_size": 0 00:19:51.809 }, 00:19:51.809 { 00:19:51.809 "name": "BaseBdev2", 00:19:51.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.809 "is_configured": false, 00:19:51.809 "data_offset": 0, 00:19:51.809 "data_size": 0 00:19:51.809 } 00:19:51.809 ] 00:19:51.809 }' 00:19:51.809 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.809 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.069 [2024-12-05 12:51:34.507622] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:52.069 [2024-12-05 12:51:34.507651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.069 [2024-12-05 12:51:34.515611] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:52.069 [2024-12-05 12:51:34.515646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:52.069 [2024-12-05 12:51:34.515655] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:52.069 [2024-12-05 12:51:34.515666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.069 [2024-12-05 12:51:34.547760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:52.069 BaseBdev1 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:52.069 12:51:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.069 [ 00:19:52.069 { 00:19:52.069 "name": "BaseBdev1", 00:19:52.069 "aliases": [ 00:19:52.069 "77945022-ea98-4807-9ac7-5ee84cbd2f96" 00:19:52.069 ], 00:19:52.069 "product_name": "Malloc disk", 00:19:52.069 "block_size": 512, 00:19:52.069 "num_blocks": 65536, 00:19:52.069 "uuid": "77945022-ea98-4807-9ac7-5ee84cbd2f96", 00:19:52.069 "assigned_rate_limits": { 00:19:52.069 "rw_ios_per_sec": 0, 00:19:52.069 "rw_mbytes_per_sec": 0, 00:19:52.069 "r_mbytes_per_sec": 0, 00:19:52.069 "w_mbytes_per_sec": 0 00:19:52.069 }, 00:19:52.069 "claimed": true, 00:19:52.069 "claim_type": "exclusive_write", 00:19:52.069 "zoned": false, 00:19:52.069 "supported_io_types": { 00:19:52.069 "read": true, 00:19:52.069 "write": true, 00:19:52.069 "unmap": true, 00:19:52.069 "flush": true, 
00:19:52.069 "reset": true, 00:19:52.069 "nvme_admin": false, 00:19:52.069 "nvme_io": false, 00:19:52.069 "nvme_io_md": false, 00:19:52.069 "write_zeroes": true, 00:19:52.069 "zcopy": true, 00:19:52.069 "get_zone_info": false, 00:19:52.069 "zone_management": false, 00:19:52.069 "zone_append": false, 00:19:52.069 "compare": false, 00:19:52.069 "compare_and_write": false, 00:19:52.069 "abort": true, 00:19:52.069 "seek_hole": false, 00:19:52.069 "seek_data": false, 00:19:52.069 "copy": true, 00:19:52.069 "nvme_iov_md": false 00:19:52.069 }, 00:19:52.069 "memory_domains": [ 00:19:52.069 { 00:19:52.069 "dma_device_id": "system", 00:19:52.069 "dma_device_type": 1 00:19:52.069 }, 00:19:52.069 { 00:19:52.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.069 "dma_device_type": 2 00:19:52.069 } 00:19:52.069 ], 00:19:52.069 "driver_specific": {} 00:19:52.069 } 00:19:52.069 ] 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.069 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.070 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.070 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.070 "name": "Existed_Raid", 00:19:52.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.070 "strip_size_kb": 0, 00:19:52.070 "state": "configuring", 00:19:52.070 "raid_level": "raid1", 00:19:52.070 "superblock": false, 00:19:52.070 "num_base_bdevs": 2, 00:19:52.070 "num_base_bdevs_discovered": 1, 00:19:52.070 "num_base_bdevs_operational": 2, 00:19:52.070 "base_bdevs_list": [ 00:19:52.070 { 00:19:52.070 "name": "BaseBdev1", 00:19:52.070 "uuid": "77945022-ea98-4807-9ac7-5ee84cbd2f96", 00:19:52.070 "is_configured": true, 00:19:52.070 "data_offset": 0, 00:19:52.070 "data_size": 65536 00:19:52.070 }, 00:19:52.070 { 00:19:52.070 "name": "BaseBdev2", 00:19:52.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.070 "is_configured": false, 00:19:52.070 "data_offset": 0, 00:19:52.070 "data_size": 0 00:19:52.070 } 00:19:52.070 ] 00:19:52.070 }' 00:19:52.070 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.070 12:51:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.331 [2024-12-05 12:51:34.887873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:52.331 [2024-12-05 12:51:34.887915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.331 [2024-12-05 12:51:34.895919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:52.331 [2024-12-05 12:51:34.897766] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:52.331 [2024-12-05 12:51:34.897911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 
00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.331 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.657 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.657 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.657 "name": "Existed_Raid", 00:19:52.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.657 "strip_size_kb": 0, 00:19:52.657 "state": "configuring", 00:19:52.657 "raid_level": "raid1", 00:19:52.657 "superblock": false, 00:19:52.657 "num_base_bdevs": 2, 00:19:52.657 
"num_base_bdevs_discovered": 1, 00:19:52.657 "num_base_bdevs_operational": 2, 00:19:52.657 "base_bdevs_list": [ 00:19:52.657 { 00:19:52.657 "name": "BaseBdev1", 00:19:52.657 "uuid": "77945022-ea98-4807-9ac7-5ee84cbd2f96", 00:19:52.657 "is_configured": true, 00:19:52.657 "data_offset": 0, 00:19:52.657 "data_size": 65536 00:19:52.657 }, 00:19:52.657 { 00:19:52.657 "name": "BaseBdev2", 00:19:52.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.657 "is_configured": false, 00:19:52.657 "data_offset": 0, 00:19:52.657 "data_size": 0 00:19:52.657 } 00:19:52.657 ] 00:19:52.657 }' 00:19:52.657 12:51:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.657 12:51:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.657 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:52.657 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.657 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.918 [2024-12-05 12:51:35.242598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:52.918 [2024-12-05 12:51:35.242650] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:52.918 [2024-12-05 12:51:35.242657] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:52.918 [2024-12-05 12:51:35.242911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:52.918 [2024-12-05 12:51:35.243057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:52.918 [2024-12-05 12:51:35.243067] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:52.918 [2024-12-05 12:51:35.243297] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.918 BaseBdev2 00:19:52.918 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.918 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:52.918 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:52.918 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.919 [ 00:19:52.919 { 00:19:52.919 "name": "BaseBdev2", 00:19:52.919 "aliases": [ 00:19:52.919 "f93b09d6-6253-439a-bf9c-71398fc07640" 00:19:52.919 ], 00:19:52.919 "product_name": "Malloc disk", 00:19:52.919 "block_size": 512, 00:19:52.919 "num_blocks": 65536, 00:19:52.919 "uuid": "f93b09d6-6253-439a-bf9c-71398fc07640", 00:19:52.919 
"assigned_rate_limits": { 00:19:52.919 "rw_ios_per_sec": 0, 00:19:52.919 "rw_mbytes_per_sec": 0, 00:19:52.919 "r_mbytes_per_sec": 0, 00:19:52.919 "w_mbytes_per_sec": 0 00:19:52.919 }, 00:19:52.919 "claimed": true, 00:19:52.919 "claim_type": "exclusive_write", 00:19:52.919 "zoned": false, 00:19:52.919 "supported_io_types": { 00:19:52.919 "read": true, 00:19:52.919 "write": true, 00:19:52.919 "unmap": true, 00:19:52.919 "flush": true, 00:19:52.919 "reset": true, 00:19:52.919 "nvme_admin": false, 00:19:52.919 "nvme_io": false, 00:19:52.919 "nvme_io_md": false, 00:19:52.919 "write_zeroes": true, 00:19:52.919 "zcopy": true, 00:19:52.919 "get_zone_info": false, 00:19:52.919 "zone_management": false, 00:19:52.919 "zone_append": false, 00:19:52.919 "compare": false, 00:19:52.919 "compare_and_write": false, 00:19:52.919 "abort": true, 00:19:52.919 "seek_hole": false, 00:19:52.919 "seek_data": false, 00:19:52.919 "copy": true, 00:19:52.919 "nvme_iov_md": false 00:19:52.919 }, 00:19:52.919 "memory_domains": [ 00:19:52.919 { 00:19:52.919 "dma_device_id": "system", 00:19:52.919 "dma_device_type": 1 00:19:52.919 }, 00:19:52.919 { 00:19:52.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.919 "dma_device_type": 2 00:19:52.919 } 00:19:52.919 ], 00:19:52.919 "driver_specific": {} 00:19:52.919 } 00:19:52.919 ] 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.919 "name": "Existed_Raid", 00:19:52.919 "uuid": "e2d50edc-d741-4be7-82f2-018c07f6bd1d", 00:19:52.919 "strip_size_kb": 0, 00:19:52.919 "state": "online", 00:19:52.919 "raid_level": "raid1", 00:19:52.919 "superblock": false, 00:19:52.919 "num_base_bdevs": 2, 00:19:52.919 "num_base_bdevs_discovered": 2, 00:19:52.919 "num_base_bdevs_operational": 2, 00:19:52.919 "base_bdevs_list": [ 00:19:52.919 { 
00:19:52.919 "name": "BaseBdev1", 00:19:52.919 "uuid": "77945022-ea98-4807-9ac7-5ee84cbd2f96", 00:19:52.919 "is_configured": true, 00:19:52.919 "data_offset": 0, 00:19:52.919 "data_size": 65536 00:19:52.919 }, 00:19:52.919 { 00:19:52.919 "name": "BaseBdev2", 00:19:52.919 "uuid": "f93b09d6-6253-439a-bf9c-71398fc07640", 00:19:52.919 "is_configured": true, 00:19:52.919 "data_offset": 0, 00:19:52.919 "data_size": 65536 00:19:52.919 } 00:19:52.919 ] 00:19:52.919 }' 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.919 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.180 [2024-12-05 12:51:35.566919] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:53.180 "name": "Existed_Raid", 00:19:53.180 "aliases": [ 00:19:53.180 "e2d50edc-d741-4be7-82f2-018c07f6bd1d" 00:19:53.180 ], 00:19:53.180 "product_name": "Raid Volume", 00:19:53.180 "block_size": 512, 00:19:53.180 "num_blocks": 65536, 00:19:53.180 "uuid": "e2d50edc-d741-4be7-82f2-018c07f6bd1d", 00:19:53.180 "assigned_rate_limits": { 00:19:53.180 "rw_ios_per_sec": 0, 00:19:53.180 "rw_mbytes_per_sec": 0, 00:19:53.180 "r_mbytes_per_sec": 0, 00:19:53.180 "w_mbytes_per_sec": 0 00:19:53.180 }, 00:19:53.180 "claimed": false, 00:19:53.180 "zoned": false, 00:19:53.180 "supported_io_types": { 00:19:53.180 "read": true, 00:19:53.180 "write": true, 00:19:53.180 "unmap": false, 00:19:53.180 "flush": false, 00:19:53.180 "reset": true, 00:19:53.180 "nvme_admin": false, 00:19:53.180 "nvme_io": false, 00:19:53.180 "nvme_io_md": false, 00:19:53.180 "write_zeroes": true, 00:19:53.180 "zcopy": false, 00:19:53.180 "get_zone_info": false, 00:19:53.180 "zone_management": false, 00:19:53.180 "zone_append": false, 00:19:53.180 "compare": false, 00:19:53.180 "compare_and_write": false, 00:19:53.180 "abort": false, 00:19:53.180 "seek_hole": false, 00:19:53.180 "seek_data": false, 00:19:53.180 "copy": false, 00:19:53.180 "nvme_iov_md": false 00:19:53.180 }, 00:19:53.180 "memory_domains": [ 00:19:53.180 { 00:19:53.180 "dma_device_id": "system", 00:19:53.180 "dma_device_type": 1 00:19:53.180 }, 00:19:53.180 { 00:19:53.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.180 "dma_device_type": 2 00:19:53.180 }, 00:19:53.180 { 00:19:53.180 "dma_device_id": "system", 00:19:53.180 "dma_device_type": 1 00:19:53.180 }, 00:19:53.180 { 00:19:53.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.180 "dma_device_type": 2 00:19:53.180 } 00:19:53.180 ], 00:19:53.180 "driver_specific": { 00:19:53.180 "raid": { 00:19:53.180 "uuid": "e2d50edc-d741-4be7-82f2-018c07f6bd1d", 
00:19:53.180 "strip_size_kb": 0, 00:19:53.180 "state": "online", 00:19:53.180 "raid_level": "raid1", 00:19:53.180 "superblock": false, 00:19:53.180 "num_base_bdevs": 2, 00:19:53.180 "num_base_bdevs_discovered": 2, 00:19:53.180 "num_base_bdevs_operational": 2, 00:19:53.180 "base_bdevs_list": [ 00:19:53.180 { 00:19:53.180 "name": "BaseBdev1", 00:19:53.180 "uuid": "77945022-ea98-4807-9ac7-5ee84cbd2f96", 00:19:53.180 "is_configured": true, 00:19:53.180 "data_offset": 0, 00:19:53.180 "data_size": 65536 00:19:53.180 }, 00:19:53.180 { 00:19:53.180 "name": "BaseBdev2", 00:19:53.180 "uuid": "f93b09d6-6253-439a-bf9c-71398fc07640", 00:19:53.180 "is_configured": true, 00:19:53.180 "data_offset": 0, 00:19:53.180 "data_size": 65536 00:19:53.180 } 00:19:53.180 ] 00:19:53.180 } 00:19:53.180 } 00:19:53.180 }' 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:53.180 BaseBdev2' 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.180 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.180 [2024-12-05 12:51:35.718753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:53.440 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.440 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:53.440 12:51:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:53.440 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:53.440 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:53.440 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:53.440 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:53.440 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:53.440 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.440 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:53.440 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:53.440 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:53.440 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.440 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.440 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.440 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.440 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.440 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:53.440 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.440 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.440 12:51:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.440 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.440 "name": "Existed_Raid", 00:19:53.440 "uuid": "e2d50edc-d741-4be7-82f2-018c07f6bd1d", 00:19:53.440 "strip_size_kb": 0, 00:19:53.440 "state": "online", 00:19:53.440 "raid_level": "raid1", 00:19:53.440 "superblock": false, 00:19:53.440 "num_base_bdevs": 2, 00:19:53.440 "num_base_bdevs_discovered": 1, 00:19:53.441 "num_base_bdevs_operational": 1, 00:19:53.441 "base_bdevs_list": [ 00:19:53.441 { 00:19:53.441 "name": null, 00:19:53.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.441 "is_configured": false, 00:19:53.441 "data_offset": 0, 00:19:53.441 "data_size": 65536 00:19:53.441 }, 00:19:53.441 { 00:19:53.441 "name": "BaseBdev2", 00:19:53.441 "uuid": "f93b09d6-6253-439a-bf9c-71398fc07640", 00:19:53.441 "is_configured": true, 00:19:53.441 "data_offset": 0, 00:19:53.441 "data_size": 65536 00:19:53.441 } 00:19:53.441 ] 00:19:53.441 }' 00:19:53.441 12:51:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.441 12:51:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.700 [2024-12-05 12:51:36.101541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:53.700 [2024-12-05 12:51:36.101615] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:53.700 [2024-12-05 12:51:36.148336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:53.700 [2024-12-05 12:51:36.148375] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:53.700 [2024-12-05 12:51:36.148384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61218 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61218 ']' 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 61218 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61218 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.700 killing process with pid 61218 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61218' 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61218 00:19:53.700 [2024-12-05 12:51:36.210335] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:53.700 12:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61218 00:19:53.700 [2024-12-05 12:51:36.218689] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:54.269 
************************************ 00:19:54.269 END TEST raid_state_function_test 00:19:54.269 ************************************ 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:54.269 00:19:54.269 real 0m3.497s 00:19:54.269 user 0m5.129s 00:19:54.269 sys 0m0.527s 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.269 12:51:36 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:19:54.269 12:51:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:54.269 12:51:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.269 12:51:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:54.269 ************************************ 00:19:54.269 START TEST raid_state_function_test_sb 00:19:54.269 ************************************ 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:54.269 12:51:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61455 00:19:54.269 Process raid pid: 61455 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61455' 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 61455 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61455 ']' 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.269 12:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.270 12:51:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.270 12:51:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:54.529 [2024-12-05 12:51:36.898128] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:19:54.529 [2024-12-05 12:51:36.898254] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.529 [2024-12-05 12:51:37.051569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.787 [2024-12-05 12:51:37.134730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.787 [2024-12-05 12:51:37.245020] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:54.787 [2024-12-05 12:51:37.245055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:55.360 12:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.360 12:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:55.360 12:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:55.360 12:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.360 12:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.360 [2024-12-05 12:51:37.741213] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:55.360 [2024-12-05 12:51:37.741267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:55.360 [2024-12-05 12:51:37.741276] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:55.360 [2024-12-05 12:51:37.741284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:55.360 12:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.360 
12:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:55.360 12:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:55.360 12:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:55.360 12:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:55.360 12:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:55.360 12:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:55.360 12:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.360 12:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.360 12:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.360 12:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.360 12:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.360 12:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:55.360 12:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.360 12:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.360 12:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.360 12:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.360 "name": "Existed_Raid", 00:19:55.360 "uuid": "debd6e00-7d99-46ef-b793-d6e938edd464", 00:19:55.360 "strip_size_kb": 0, 
00:19:55.360 "state": "configuring", 00:19:55.360 "raid_level": "raid1", 00:19:55.361 "superblock": true, 00:19:55.361 "num_base_bdevs": 2, 00:19:55.361 "num_base_bdevs_discovered": 0, 00:19:55.361 "num_base_bdevs_operational": 2, 00:19:55.361 "base_bdevs_list": [ 00:19:55.361 { 00:19:55.361 "name": "BaseBdev1", 00:19:55.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.361 "is_configured": false, 00:19:55.361 "data_offset": 0, 00:19:55.361 "data_size": 0 00:19:55.361 }, 00:19:55.361 { 00:19:55.361 "name": "BaseBdev2", 00:19:55.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.361 "is_configured": false, 00:19:55.361 "data_offset": 0, 00:19:55.361 "data_size": 0 00:19:55.361 } 00:19:55.361 ] 00:19:55.361 }' 00:19:55.361 12:51:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.361 12:51:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.623 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:55.623 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.624 [2024-12-05 12:51:38.061218] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:55.624 [2024-12-05 12:51:38.061249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.624 12:51:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.624 [2024-12-05 12:51:38.069215] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:55.624 [2024-12-05 12:51:38.069248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:55.624 [2024-12-05 12:51:38.069254] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:55.624 [2024-12-05 12:51:38.069263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.624 [2024-12-05 12:51:38.097358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:55.624 BaseBdev1 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.624 [ 00:19:55.624 { 00:19:55.624 "name": "BaseBdev1", 00:19:55.624 "aliases": [ 00:19:55.624 "a333ee13-f091-412a-be66-ebcd20401419" 00:19:55.624 ], 00:19:55.624 "product_name": "Malloc disk", 00:19:55.624 "block_size": 512, 00:19:55.624 "num_blocks": 65536, 00:19:55.624 "uuid": "a333ee13-f091-412a-be66-ebcd20401419", 00:19:55.624 "assigned_rate_limits": { 00:19:55.624 "rw_ios_per_sec": 0, 00:19:55.624 "rw_mbytes_per_sec": 0, 00:19:55.624 "r_mbytes_per_sec": 0, 00:19:55.624 "w_mbytes_per_sec": 0 00:19:55.624 }, 00:19:55.624 "claimed": true, 00:19:55.624 "claim_type": "exclusive_write", 00:19:55.624 "zoned": false, 00:19:55.624 "supported_io_types": { 00:19:55.624 "read": true, 00:19:55.624 "write": true, 00:19:55.624 "unmap": true, 00:19:55.624 "flush": true, 00:19:55.624 "reset": true, 00:19:55.624 "nvme_admin": false, 00:19:55.624 "nvme_io": false, 00:19:55.624 "nvme_io_md": false, 00:19:55.624 "write_zeroes": true, 00:19:55.624 "zcopy": true, 00:19:55.624 "get_zone_info": false, 00:19:55.624 "zone_management": false, 00:19:55.624 "zone_append": false, 00:19:55.624 "compare": false, 00:19:55.624 "compare_and_write": false, 00:19:55.624 
"abort": true, 00:19:55.624 "seek_hole": false, 00:19:55.624 "seek_data": false, 00:19:55.624 "copy": true, 00:19:55.624 "nvme_iov_md": false 00:19:55.624 }, 00:19:55.624 "memory_domains": [ 00:19:55.624 { 00:19:55.624 "dma_device_id": "system", 00:19:55.624 "dma_device_type": 1 00:19:55.624 }, 00:19:55.624 { 00:19:55.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.624 "dma_device_type": 2 00:19:55.624 } 00:19:55.624 ], 00:19:55.624 "driver_specific": {} 00:19:55.624 } 00:19:55.624 ] 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.624 "name": "Existed_Raid", 00:19:55.624 "uuid": "e67979aa-118a-4092-bc6d-51ff70ea256c", 00:19:55.624 "strip_size_kb": 0, 00:19:55.624 "state": "configuring", 00:19:55.624 "raid_level": "raid1", 00:19:55.624 "superblock": true, 00:19:55.624 "num_base_bdevs": 2, 00:19:55.624 "num_base_bdevs_discovered": 1, 00:19:55.624 "num_base_bdevs_operational": 2, 00:19:55.624 "base_bdevs_list": [ 00:19:55.624 { 00:19:55.624 "name": "BaseBdev1", 00:19:55.624 "uuid": "a333ee13-f091-412a-be66-ebcd20401419", 00:19:55.624 "is_configured": true, 00:19:55.624 "data_offset": 2048, 00:19:55.624 "data_size": 63488 00:19:55.624 }, 00:19:55.624 { 00:19:55.624 "name": "BaseBdev2", 00:19:55.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.624 "is_configured": false, 00:19:55.624 "data_offset": 0, 00:19:55.624 "data_size": 0 00:19:55.624 } 00:19:55.624 ] 00:19:55.624 }' 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.624 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:55.886 [2024-12-05 12:51:38.437463] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:55.886 [2024-12-05 12:51:38.437515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.886 [2024-12-05 12:51:38.445505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:55.886 [2024-12-05 12:51:38.447043] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:55.886 [2024-12-05 12:51:38.447080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.886 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.887 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.887 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:55.887 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.146 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.146 "name": "Existed_Raid", 00:19:56.146 "uuid": "d270dfd0-a653-47a1-b723-53d98ea120f9", 00:19:56.146 "strip_size_kb": 0, 00:19:56.146 "state": "configuring", 00:19:56.146 "raid_level": "raid1", 00:19:56.146 "superblock": true, 00:19:56.146 "num_base_bdevs": 2, 00:19:56.146 "num_base_bdevs_discovered": 1, 00:19:56.146 "num_base_bdevs_operational": 2, 00:19:56.146 "base_bdevs_list": [ 00:19:56.146 { 00:19:56.146 "name": "BaseBdev1", 00:19:56.146 "uuid": "a333ee13-f091-412a-be66-ebcd20401419", 00:19:56.146 "is_configured": true, 00:19:56.146 "data_offset": 2048, 
00:19:56.146 "data_size": 63488 00:19:56.146 }, 00:19:56.146 { 00:19:56.146 "name": "BaseBdev2", 00:19:56.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.146 "is_configured": false, 00:19:56.146 "data_offset": 0, 00:19:56.146 "data_size": 0 00:19:56.146 } 00:19:56.146 ] 00:19:56.146 }' 00:19:56.146 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.146 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.407 [2024-12-05 12:51:38.764414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:56.407 [2024-12-05 12:51:38.764627] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:56.407 [2024-12-05 12:51:38.764638] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:56.407 BaseBdev2 00:19:56.407 [2024-12-05 12:51:38.764853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:56.407 [2024-12-05 12:51:38.764972] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:56.407 [2024-12-05 12:51:38.764982] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:56.407 [2024-12-05 12:51:38.765087] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.407 [ 00:19:56.407 { 00:19:56.407 "name": "BaseBdev2", 00:19:56.407 "aliases": [ 00:19:56.407 "3a3fb93b-51eb-4d9f-ba3c-8c956a912800" 00:19:56.407 ], 00:19:56.407 "product_name": "Malloc disk", 00:19:56.407 "block_size": 512, 00:19:56.407 "num_blocks": 65536, 00:19:56.407 "uuid": "3a3fb93b-51eb-4d9f-ba3c-8c956a912800", 00:19:56.407 "assigned_rate_limits": { 00:19:56.407 "rw_ios_per_sec": 0, 00:19:56.407 "rw_mbytes_per_sec": 0, 00:19:56.407 "r_mbytes_per_sec": 0, 00:19:56.407 "w_mbytes_per_sec": 0 00:19:56.407 }, 00:19:56.407 "claimed": true, 00:19:56.407 "claim_type": 
"exclusive_write", 00:19:56.407 "zoned": false, 00:19:56.407 "supported_io_types": { 00:19:56.407 "read": true, 00:19:56.407 "write": true, 00:19:56.407 "unmap": true, 00:19:56.407 "flush": true, 00:19:56.407 "reset": true, 00:19:56.407 "nvme_admin": false, 00:19:56.407 "nvme_io": false, 00:19:56.407 "nvme_io_md": false, 00:19:56.407 "write_zeroes": true, 00:19:56.407 "zcopy": true, 00:19:56.407 "get_zone_info": false, 00:19:56.407 "zone_management": false, 00:19:56.407 "zone_append": false, 00:19:56.407 "compare": false, 00:19:56.407 "compare_and_write": false, 00:19:56.407 "abort": true, 00:19:56.407 "seek_hole": false, 00:19:56.407 "seek_data": false, 00:19:56.407 "copy": true, 00:19:56.407 "nvme_iov_md": false 00:19:56.407 }, 00:19:56.407 "memory_domains": [ 00:19:56.407 { 00:19:56.407 "dma_device_id": "system", 00:19:56.407 "dma_device_type": 1 00:19:56.407 }, 00:19:56.407 { 00:19:56.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.407 "dma_device_type": 2 00:19:56.407 } 00:19:56.407 ], 00:19:56.407 "driver_specific": {} 00:19:56.407 } 00:19:56.407 ] 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.407 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:56.408 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.408 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.408 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.408 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.408 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.408 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:56.408 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.408 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.408 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.408 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.408 "name": "Existed_Raid", 00:19:56.408 "uuid": "d270dfd0-a653-47a1-b723-53d98ea120f9", 00:19:56.408 "strip_size_kb": 0, 00:19:56.408 "state": "online", 00:19:56.408 "raid_level": "raid1", 00:19:56.408 "superblock": true, 00:19:56.408 "num_base_bdevs": 2, 00:19:56.408 "num_base_bdevs_discovered": 2, 00:19:56.408 "num_base_bdevs_operational": 2, 00:19:56.408 "base_bdevs_list": [ 00:19:56.408 { 00:19:56.408 "name": "BaseBdev1", 00:19:56.408 "uuid": "a333ee13-f091-412a-be66-ebcd20401419", 00:19:56.408 "is_configured": true, 00:19:56.408 "data_offset": 2048, 00:19:56.408 "data_size": 63488 
00:19:56.408 }, 00:19:56.408 { 00:19:56.408 "name": "BaseBdev2", 00:19:56.408 "uuid": "3a3fb93b-51eb-4d9f-ba3c-8c956a912800", 00:19:56.408 "is_configured": true, 00:19:56.408 "data_offset": 2048, 00:19:56.408 "data_size": 63488 00:19:56.408 } 00:19:56.408 ] 00:19:56.408 }' 00:19:56.408 12:51:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.408 12:51:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.669 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:56.669 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:56.669 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:56.669 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:56.669 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:56.669 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.670 [2024-12-05 12:51:39.084759] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:56.670 "name": 
"Existed_Raid", 00:19:56.670 "aliases": [ 00:19:56.670 "d270dfd0-a653-47a1-b723-53d98ea120f9" 00:19:56.670 ], 00:19:56.670 "product_name": "Raid Volume", 00:19:56.670 "block_size": 512, 00:19:56.670 "num_blocks": 63488, 00:19:56.670 "uuid": "d270dfd0-a653-47a1-b723-53d98ea120f9", 00:19:56.670 "assigned_rate_limits": { 00:19:56.670 "rw_ios_per_sec": 0, 00:19:56.670 "rw_mbytes_per_sec": 0, 00:19:56.670 "r_mbytes_per_sec": 0, 00:19:56.670 "w_mbytes_per_sec": 0 00:19:56.670 }, 00:19:56.670 "claimed": false, 00:19:56.670 "zoned": false, 00:19:56.670 "supported_io_types": { 00:19:56.670 "read": true, 00:19:56.670 "write": true, 00:19:56.670 "unmap": false, 00:19:56.670 "flush": false, 00:19:56.670 "reset": true, 00:19:56.670 "nvme_admin": false, 00:19:56.670 "nvme_io": false, 00:19:56.670 "nvme_io_md": false, 00:19:56.670 "write_zeroes": true, 00:19:56.670 "zcopy": false, 00:19:56.670 "get_zone_info": false, 00:19:56.670 "zone_management": false, 00:19:56.670 "zone_append": false, 00:19:56.670 "compare": false, 00:19:56.670 "compare_and_write": false, 00:19:56.670 "abort": false, 00:19:56.670 "seek_hole": false, 00:19:56.670 "seek_data": false, 00:19:56.670 "copy": false, 00:19:56.670 "nvme_iov_md": false 00:19:56.670 }, 00:19:56.670 "memory_domains": [ 00:19:56.670 { 00:19:56.670 "dma_device_id": "system", 00:19:56.670 "dma_device_type": 1 00:19:56.670 }, 00:19:56.670 { 00:19:56.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.670 "dma_device_type": 2 00:19:56.670 }, 00:19:56.670 { 00:19:56.670 "dma_device_id": "system", 00:19:56.670 "dma_device_type": 1 00:19:56.670 }, 00:19:56.670 { 00:19:56.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.670 "dma_device_type": 2 00:19:56.670 } 00:19:56.670 ], 00:19:56.670 "driver_specific": { 00:19:56.670 "raid": { 00:19:56.670 "uuid": "d270dfd0-a653-47a1-b723-53d98ea120f9", 00:19:56.670 "strip_size_kb": 0, 00:19:56.670 "state": "online", 00:19:56.670 "raid_level": "raid1", 00:19:56.670 "superblock": true, 00:19:56.670 
"num_base_bdevs": 2, 00:19:56.670 "num_base_bdevs_discovered": 2, 00:19:56.670 "num_base_bdevs_operational": 2, 00:19:56.670 "base_bdevs_list": [ 00:19:56.670 { 00:19:56.670 "name": "BaseBdev1", 00:19:56.670 "uuid": "a333ee13-f091-412a-be66-ebcd20401419", 00:19:56.670 "is_configured": true, 00:19:56.670 "data_offset": 2048, 00:19:56.670 "data_size": 63488 00:19:56.670 }, 00:19:56.670 { 00:19:56.670 "name": "BaseBdev2", 00:19:56.670 "uuid": "3a3fb93b-51eb-4d9f-ba3c-8c956a912800", 00:19:56.670 "is_configured": true, 00:19:56.670 "data_offset": 2048, 00:19:56.670 "data_size": 63488 00:19:56.670 } 00:19:56.670 ] 00:19:56.670 } 00:19:56.670 } 00:19:56.670 }' 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:56.670 BaseBdev2' 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.670 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.670 [2024-12-05 12:51:39.232600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:56.998 12:51:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:56.998 12:51:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.998 "name": "Existed_Raid", 00:19:56.998 "uuid": "d270dfd0-a653-47a1-b723-53d98ea120f9", 00:19:56.998 "strip_size_kb": 0, 00:19:56.998 "state": "online", 00:19:56.998 "raid_level": "raid1", 00:19:56.998 "superblock": true, 00:19:56.998 "num_base_bdevs": 2, 00:19:56.998 "num_base_bdevs_discovered": 1, 00:19:56.998 "num_base_bdevs_operational": 1, 00:19:56.998 "base_bdevs_list": [ 00:19:56.998 { 00:19:56.998 "name": null, 00:19:56.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.998 "is_configured": false, 00:19:56.998 "data_offset": 0, 00:19:56.998 "data_size": 63488 00:19:56.998 }, 00:19:56.998 { 00:19:56.998 "name": "BaseBdev2", 00:19:56.998 "uuid": "3a3fb93b-51eb-4d9f-ba3c-8c956a912800", 00:19:56.998 "is_configured": true, 00:19:56.998 "data_offset": 2048, 00:19:56.998 "data_size": 63488 00:19:56.998 } 00:19:56.998 ] 00:19:56.998 }' 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.998 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.272 12:51:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.272 [2024-12-05 12:51:39.641161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:57.272 [2024-12-05 12:51:39.641247] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:57.272 [2024-12-05 12:51:39.689146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:57.272 [2024-12-05 12:51:39.689196] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:57.272 [2024-12-05 12:51:39.689206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61455 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61455 ']' 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61455 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61455 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:57.272 killing process with pid 61455 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61455' 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61455 00:19:57.272 [2024-12-05 12:51:39.747899] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:57.272 12:51:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 61455 00:19:57.272 [2024-12-05 12:51:39.756330] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:57.844 12:51:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:57.844 00:19:57.844 real 0m3.508s 00:19:57.844 user 0m5.191s 00:19:57.844 sys 0m0.485s 00:19:57.844 12:51:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:57.844 ************************************ 00:19:57.844 END TEST raid_state_function_test_sb 00:19:57.844 ************************************ 00:19:57.844 12:51:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.844 12:51:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:19:57.844 12:51:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:57.844 12:51:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:57.844 12:51:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:57.844 ************************************ 00:19:57.844 START TEST raid_superblock_test 00:19:57.844 ************************************ 00:19:57.844 12:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:57.844 12:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:57.844 12:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:57.844 12:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:57.844 12:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:57.844 12:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:57.844 12:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:57.844 12:51:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:57.844 12:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:57.844 12:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:57.844 12:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:57.844 12:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:57.844 12:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:57.844 12:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:57.845 12:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:57.845 12:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:57.845 12:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61686 00:19:57.845 12:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61686 00:19:57.845 12:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61686 ']' 00:19:57.845 12:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.845 12:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.845 12:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:57.845 12:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.845 12:51:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.845 12:51:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:58.104 [2024-12-05 12:51:40.436306] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:19:58.104 [2024-12-05 12:51:40.436410] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61686 ] 00:19:58.104 [2024-12-05 12:51:40.586109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.104 [2024-12-05 12:51:40.671381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.364 [2024-12-05 12:51:40.780214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:58.364 [2024-12-05 12:51:40.780247] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:58.949 
12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.949 malloc1 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.949 [2024-12-05 12:51:41.329856] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:58.949 [2024-12-05 12:51:41.329915] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:58.949 [2024-12-05 12:51:41.329933] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:58.949 [2024-12-05 12:51:41.329941] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.949 [2024-12-05 12:51:41.331761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.949 [2024-12-05 12:51:41.331796] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:58.949 pt1 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.949 malloc2 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.949 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.949 [2024-12-05 12:51:41.365952] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:58.949 [2024-12-05 12:51:41.366002] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:58.950 [2024-12-05 
12:51:41.366022] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:58.950 [2024-12-05 12:51:41.366029] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.950 [2024-12-05 12:51:41.367827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.950 [2024-12-05 12:51:41.367858] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:58.950 pt2 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.950 [2024-12-05 12:51:41.374003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:58.950 [2024-12-05 12:51:41.375537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:58.950 [2024-12-05 12:51:41.375671] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:58.950 [2024-12-05 12:51:41.375684] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:58.950 [2024-12-05 12:51:41.375902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:58.950 [2024-12-05 12:51:41.376027] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:58.950 [2024-12-05 12:51:41.376038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:19:58.950 [2024-12-05 12:51:41.376160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:19:58.950 "name": "raid_bdev1", 00:19:58.950 "uuid": "57580a36-8fc2-44cc-ae32-9825db1dd1b6", 00:19:58.950 "strip_size_kb": 0, 00:19:58.950 "state": "online", 00:19:58.950 "raid_level": "raid1", 00:19:58.950 "superblock": true, 00:19:58.950 "num_base_bdevs": 2, 00:19:58.950 "num_base_bdevs_discovered": 2, 00:19:58.950 "num_base_bdevs_operational": 2, 00:19:58.950 "base_bdevs_list": [ 00:19:58.950 { 00:19:58.950 "name": "pt1", 00:19:58.950 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:58.950 "is_configured": true, 00:19:58.950 "data_offset": 2048, 00:19:58.950 "data_size": 63488 00:19:58.950 }, 00:19:58.950 { 00:19:58.950 "name": "pt2", 00:19:58.950 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:58.950 "is_configured": true, 00:19:58.950 "data_offset": 2048, 00:19:58.950 "data_size": 63488 00:19:58.950 } 00:19:58.950 ] 00:19:58.950 }' 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.950 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.212 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:59.212 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:59.212 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:59.212 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:59.212 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:59.212 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:59.212 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:59.212 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:59.212 12:51:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.212 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.212 [2024-12-05 12:51:41.694271] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:59.212 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.212 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:59.212 "name": "raid_bdev1", 00:19:59.212 "aliases": [ 00:19:59.212 "57580a36-8fc2-44cc-ae32-9825db1dd1b6" 00:19:59.212 ], 00:19:59.212 "product_name": "Raid Volume", 00:19:59.212 "block_size": 512, 00:19:59.212 "num_blocks": 63488, 00:19:59.212 "uuid": "57580a36-8fc2-44cc-ae32-9825db1dd1b6", 00:19:59.212 "assigned_rate_limits": { 00:19:59.212 "rw_ios_per_sec": 0, 00:19:59.212 "rw_mbytes_per_sec": 0, 00:19:59.212 "r_mbytes_per_sec": 0, 00:19:59.212 "w_mbytes_per_sec": 0 00:19:59.212 }, 00:19:59.212 "claimed": false, 00:19:59.212 "zoned": false, 00:19:59.212 "supported_io_types": { 00:19:59.212 "read": true, 00:19:59.212 "write": true, 00:19:59.212 "unmap": false, 00:19:59.212 "flush": false, 00:19:59.212 "reset": true, 00:19:59.212 "nvme_admin": false, 00:19:59.212 "nvme_io": false, 00:19:59.212 "nvme_io_md": false, 00:19:59.212 "write_zeroes": true, 00:19:59.212 "zcopy": false, 00:19:59.212 "get_zone_info": false, 00:19:59.212 "zone_management": false, 00:19:59.212 "zone_append": false, 00:19:59.212 "compare": false, 00:19:59.212 "compare_and_write": false, 00:19:59.212 "abort": false, 00:19:59.212 "seek_hole": false, 00:19:59.212 "seek_data": false, 00:19:59.212 "copy": false, 00:19:59.212 "nvme_iov_md": false 00:19:59.212 }, 00:19:59.212 "memory_domains": [ 00:19:59.212 { 00:19:59.212 "dma_device_id": "system", 00:19:59.212 "dma_device_type": 1 00:19:59.212 }, 00:19:59.212 { 00:19:59.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:59.212 
"dma_device_type": 2 00:19:59.212 }, 00:19:59.212 { 00:19:59.212 "dma_device_id": "system", 00:19:59.212 "dma_device_type": 1 00:19:59.212 }, 00:19:59.212 { 00:19:59.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:59.212 "dma_device_type": 2 00:19:59.212 } 00:19:59.212 ], 00:19:59.212 "driver_specific": { 00:19:59.212 "raid": { 00:19:59.212 "uuid": "57580a36-8fc2-44cc-ae32-9825db1dd1b6", 00:19:59.212 "strip_size_kb": 0, 00:19:59.212 "state": "online", 00:19:59.212 "raid_level": "raid1", 00:19:59.212 "superblock": true, 00:19:59.212 "num_base_bdevs": 2, 00:19:59.212 "num_base_bdevs_discovered": 2, 00:19:59.212 "num_base_bdevs_operational": 2, 00:19:59.212 "base_bdevs_list": [ 00:19:59.212 { 00:19:59.212 "name": "pt1", 00:19:59.212 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:59.212 "is_configured": true, 00:19:59.212 "data_offset": 2048, 00:19:59.212 "data_size": 63488 00:19:59.212 }, 00:19:59.212 { 00:19:59.212 "name": "pt2", 00:19:59.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:59.212 "is_configured": true, 00:19:59.212 "data_offset": 2048, 00:19:59.212 "data_size": 63488 00:19:59.212 } 00:19:59.212 ] 00:19:59.212 } 00:19:59.212 } 00:19:59.212 }' 00:19:59.212 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:59.212 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:59.212 pt2' 00:19:59.212 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:59.212 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:59.212 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:59.212 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:59.212 12:51:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.212 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.212 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:59.212 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.474 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:59.474 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:59.474 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:59.474 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:59.474 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.474 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.474 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:59.474 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.474 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:59.474 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:59.474 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:59.474 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:59.474 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.474 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.474 
[2024-12-05 12:51:41.858287] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:59.474 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.474 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=57580a36-8fc2-44cc-ae32-9825db1dd1b6 00:19:59.474 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 57580a36-8fc2-44cc-ae32-9825db1dd1b6 ']' 00:19:59.474 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:59.474 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.474 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.474 [2024-12-05 12:51:41.886034] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:59.475 [2024-12-05 12:51:41.886055] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:59.475 [2024-12-05 12:51:41.886118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:59.475 [2024-12-05 12:51:41.886168] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:59.475 [2024-12-05 12:51:41.886177] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 
00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.475 [2024-12-05 12:51:41.986088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:59.475 [2024-12-05 12:51:41.987671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:59.475 [2024-12-05 12:51:41.987730] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:59.475 [2024-12-05 12:51:41.987773] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:59.475 [2024-12-05 12:51:41.987785] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:19:59.475 [2024-12-05 12:51:41.987793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:59.475 request: 00:19:59.475 { 00:19:59.475 "name": "raid_bdev1", 00:19:59.475 "raid_level": "raid1", 00:19:59.475 "base_bdevs": [ 00:19:59.475 "malloc1", 00:19:59.475 "malloc2" 00:19:59.475 ], 00:19:59.475 "superblock": false, 00:19:59.475 "method": "bdev_raid_create", 00:19:59.475 "req_id": 1 00:19:59.475 } 00:19:59.475 Got JSON-RPC error response 00:19:59.475 response: 00:19:59.475 { 00:19:59.475 "code": -17, 00:19:59.475 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:59.475 } 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.475 12:51:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:59.475 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.475 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:59.475 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:59.475 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # 
rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:59.475 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.475 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.475 [2024-12-05 12:51:42.026082] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:59.475 [2024-12-05 12:51:42.026128] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.475 [2024-12-05 12:51:42.026145] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:59.475 [2024-12-05 12:51:42.026154] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.475 [2024-12-05 12:51:42.027981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.475 [2024-12-05 12:51:42.028013] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:59.475 [2024-12-05 12:51:42.028079] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:59.475 [2024-12-05 12:51:42.028120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:59.475 pt1 00:19:59.475 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.475 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:59.475 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:59.475 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:59.475 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:59.475 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:59.475 12:51:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:59.475 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.476 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.476 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.476 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.476 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.476 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.476 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.476 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.476 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.736 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.736 "name": "raid_bdev1", 00:19:59.736 "uuid": "57580a36-8fc2-44cc-ae32-9825db1dd1b6", 00:19:59.736 "strip_size_kb": 0, 00:19:59.736 "state": "configuring", 00:19:59.736 "raid_level": "raid1", 00:19:59.736 "superblock": true, 00:19:59.736 "num_base_bdevs": 2, 00:19:59.736 "num_base_bdevs_discovered": 1, 00:19:59.736 "num_base_bdevs_operational": 2, 00:19:59.736 "base_bdevs_list": [ 00:19:59.736 { 00:19:59.736 "name": "pt1", 00:19:59.736 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:59.736 "is_configured": true, 00:19:59.736 "data_offset": 2048, 00:19:59.736 "data_size": 63488 00:19:59.736 }, 00:19:59.736 { 00:19:59.736 "name": null, 00:19:59.736 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:59.736 "is_configured": false, 00:19:59.736 "data_offset": 2048, 00:19:59.736 "data_size": 63488 00:19:59.736 } 
00:19:59.736 ] 00:19:59.736 }' 00:19:59.736 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.736 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.997 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:59.997 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:59.997 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:59.997 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:59.997 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.997 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.997 [2024-12-05 12:51:42.350157] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:59.997 [2024-12-05 12:51:42.350209] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.997 [2024-12-05 12:51:42.350226] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:59.997 [2024-12-05 12:51:42.350235] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.997 [2024-12-05 12:51:42.350586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.997 [2024-12-05 12:51:42.350600] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:59.997 [2024-12-05 12:51:42.350659] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:59.997 [2024-12-05 12:51:42.350679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:59.997 [2024-12-05 12:51:42.350767] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 
00:19:59.997 [2024-12-05 12:51:42.350776] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:59.998 [2024-12-05 12:51:42.350966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:59.998 [2024-12-05 12:51:42.351114] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:59.998 [2024-12-05 12:51:42.351123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:59.998 [2024-12-05 12:51:42.351229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:59.998 pt2 00:19:59.998 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.998 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:59.998 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:59.998 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:59.998 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:59.998 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:59.998 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:59.998 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:59.998 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:59.998 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.998 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.998 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.998 
12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.998 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.998 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.998 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.998 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.998 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.998 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.998 "name": "raid_bdev1", 00:19:59.998 "uuid": "57580a36-8fc2-44cc-ae32-9825db1dd1b6", 00:19:59.998 "strip_size_kb": 0, 00:19:59.998 "state": "online", 00:19:59.998 "raid_level": "raid1", 00:19:59.998 "superblock": true, 00:19:59.998 "num_base_bdevs": 2, 00:19:59.998 "num_base_bdevs_discovered": 2, 00:19:59.998 "num_base_bdevs_operational": 2, 00:19:59.998 "base_bdevs_list": [ 00:19:59.998 { 00:19:59.998 "name": "pt1", 00:19:59.998 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:59.998 "is_configured": true, 00:19:59.998 "data_offset": 2048, 00:19:59.998 "data_size": 63488 00:19:59.998 }, 00:19:59.998 { 00:19:59.998 "name": "pt2", 00:19:59.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:59.998 "is_configured": true, 00:19:59.998 "data_offset": 2048, 00:19:59.998 "data_size": 63488 00:19:59.998 } 00:19:59.998 ] 00:19:59.998 }' 00:19:59.998 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.998 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.259 [2024-12-05 12:51:42.650420] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:00.259 "name": "raid_bdev1", 00:20:00.259 "aliases": [ 00:20:00.259 "57580a36-8fc2-44cc-ae32-9825db1dd1b6" 00:20:00.259 ], 00:20:00.259 "product_name": "Raid Volume", 00:20:00.259 "block_size": 512, 00:20:00.259 "num_blocks": 63488, 00:20:00.259 "uuid": "57580a36-8fc2-44cc-ae32-9825db1dd1b6", 00:20:00.259 "assigned_rate_limits": { 00:20:00.259 "rw_ios_per_sec": 0, 00:20:00.259 "rw_mbytes_per_sec": 0, 00:20:00.259 "r_mbytes_per_sec": 0, 00:20:00.259 "w_mbytes_per_sec": 0 00:20:00.259 }, 00:20:00.259 "claimed": false, 00:20:00.259 "zoned": false, 00:20:00.259 "supported_io_types": { 00:20:00.259 "read": true, 00:20:00.259 "write": true, 00:20:00.259 "unmap": false, 00:20:00.259 "flush": false, 00:20:00.259 "reset": true, 00:20:00.259 "nvme_admin": false, 00:20:00.259 "nvme_io": false, 00:20:00.259 
"nvme_io_md": false, 00:20:00.259 "write_zeroes": true, 00:20:00.259 "zcopy": false, 00:20:00.259 "get_zone_info": false, 00:20:00.259 "zone_management": false, 00:20:00.259 "zone_append": false, 00:20:00.259 "compare": false, 00:20:00.259 "compare_and_write": false, 00:20:00.259 "abort": false, 00:20:00.259 "seek_hole": false, 00:20:00.259 "seek_data": false, 00:20:00.259 "copy": false, 00:20:00.259 "nvme_iov_md": false 00:20:00.259 }, 00:20:00.259 "memory_domains": [ 00:20:00.259 { 00:20:00.259 "dma_device_id": "system", 00:20:00.259 "dma_device_type": 1 00:20:00.259 }, 00:20:00.259 { 00:20:00.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.259 "dma_device_type": 2 00:20:00.259 }, 00:20:00.259 { 00:20:00.259 "dma_device_id": "system", 00:20:00.259 "dma_device_type": 1 00:20:00.259 }, 00:20:00.259 { 00:20:00.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.259 "dma_device_type": 2 00:20:00.259 } 00:20:00.259 ], 00:20:00.259 "driver_specific": { 00:20:00.259 "raid": { 00:20:00.259 "uuid": "57580a36-8fc2-44cc-ae32-9825db1dd1b6", 00:20:00.259 "strip_size_kb": 0, 00:20:00.259 "state": "online", 00:20:00.259 "raid_level": "raid1", 00:20:00.259 "superblock": true, 00:20:00.259 "num_base_bdevs": 2, 00:20:00.259 "num_base_bdevs_discovered": 2, 00:20:00.259 "num_base_bdevs_operational": 2, 00:20:00.259 "base_bdevs_list": [ 00:20:00.259 { 00:20:00.259 "name": "pt1", 00:20:00.259 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:00.259 "is_configured": true, 00:20:00.259 "data_offset": 2048, 00:20:00.259 "data_size": 63488 00:20:00.259 }, 00:20:00.259 { 00:20:00.259 "name": "pt2", 00:20:00.259 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:00.259 "is_configured": true, 00:20:00.259 "data_offset": 2048, 00:20:00.259 "data_size": 63488 00:20:00.259 } 00:20:00.259 ] 00:20:00.259 } 00:20:00.259 } 00:20:00.259 }' 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:00.259 pt2' 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:00.259 [2024-12-05 12:51:42.814437] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:00.259 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 57580a36-8fc2-44cc-ae32-9825db1dd1b6 '!=' 57580a36-8fc2-44cc-ae32-9825db1dd1b6 ']' 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.534 [2024-12-05 12:51:42.850266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.534 "name": "raid_bdev1", 00:20:00.534 "uuid": "57580a36-8fc2-44cc-ae32-9825db1dd1b6", 00:20:00.534 "strip_size_kb": 0, 00:20:00.534 "state": "online", 00:20:00.534 "raid_level": "raid1", 00:20:00.534 "superblock": true, 00:20:00.534 "num_base_bdevs": 2, 00:20:00.534 "num_base_bdevs_discovered": 1, 00:20:00.534 "num_base_bdevs_operational": 1, 00:20:00.534 
"base_bdevs_list": [ 00:20:00.534 { 00:20:00.534 "name": null, 00:20:00.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.534 "is_configured": false, 00:20:00.534 "data_offset": 0, 00:20:00.534 "data_size": 63488 00:20:00.534 }, 00:20:00.534 { 00:20:00.534 "name": "pt2", 00:20:00.534 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:00.534 "is_configured": true, 00:20:00.534 "data_offset": 2048, 00:20:00.534 "data_size": 63488 00:20:00.534 } 00:20:00.534 ] 00:20:00.534 }' 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.534 12:51:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.795 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:00.795 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.795 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.795 [2024-12-05 12:51:43.162309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:00.795 [2024-12-05 12:51:43.162430] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:00.795 [2024-12-05 12:51:43.162507] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:00.795 [2024-12-05 12:51:43.162546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:00.796 [2024-12-05 12:51:43.162555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 
00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:20:00.796 [2024-12-05 12:51:43.214299] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:00.796 [2024-12-05 12:51:43.214342] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.796 [2024-12-05 12:51:43.214355] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:00.796 [2024-12-05 12:51:43.214364] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.796 [2024-12-05 12:51:43.216187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.796 [2024-12-05 12:51:43.216220] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:00.796 [2024-12-05 12:51:43.216277] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:00.796 [2024-12-05 12:51:43.216311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:00.796 [2024-12-05 12:51:43.216384] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:00.796 [2024-12-05 12:51:43.216394] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:00.796 [2024-12-05 12:51:43.216596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:00.796 [2024-12-05 12:51:43.216706] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:00.796 [2024-12-05 12:51:43.216740] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:00.796 [2024-12-05 12:51:43.216848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.796 pt2 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.796 "name": "raid_bdev1", 00:20:00.796 "uuid": "57580a36-8fc2-44cc-ae32-9825db1dd1b6", 00:20:00.796 "strip_size_kb": 0, 00:20:00.796 "state": "online", 00:20:00.796 "raid_level": "raid1", 00:20:00.796 "superblock": true, 00:20:00.796 "num_base_bdevs": 2, 00:20:00.796 "num_base_bdevs_discovered": 1, 00:20:00.796 "num_base_bdevs_operational": 1, 00:20:00.796 
"base_bdevs_list": [ 00:20:00.796 { 00:20:00.796 "name": null, 00:20:00.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.796 "is_configured": false, 00:20:00.796 "data_offset": 2048, 00:20:00.796 "data_size": 63488 00:20:00.796 }, 00:20:00.796 { 00:20:00.796 "name": "pt2", 00:20:00.796 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:00.796 "is_configured": true, 00:20:00.796 "data_offset": 2048, 00:20:00.796 "data_size": 63488 00:20:00.796 } 00:20:00.796 ] 00:20:00.796 }' 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.796 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.056 [2024-12-05 12:51:43.534343] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:01.056 [2024-12-05 12:51:43.534365] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:01.056 [2024-12-05 12:51:43.534415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:01.056 [2024-12-05 12:51:43.534456] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:01.056 [2024-12-05 12:51:43.534464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.056 [2024-12-05 12:51:43.574369] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:01.056 [2024-12-05 12:51:43.574417] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.056 [2024-12-05 12:51:43.574433] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:01.056 [2024-12-05 12:51:43.574440] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.056 [2024-12-05 12:51:43.576278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.056 [2024-12-05 12:51:43.576395] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:01.056 [2024-12-05 12:51:43.576471] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:01.056 [2024-12-05 12:51:43.576523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:01.056 [2024-12-05 12:51:43.576631] 
bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:01.056 [2024-12-05 12:51:43.576639] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:01.056 [2024-12-05 12:51:43.576651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:01.056 [2024-12-05 12:51:43.576687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:01.056 [2024-12-05 12:51:43.576743] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:01.056 [2024-12-05 12:51:43.576750] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:01.056 [2024-12-05 12:51:43.576952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:01.056 [2024-12-05 12:51:43.577057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:01.056 [2024-12-05 12:51:43.577065] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:01.056 [2024-12-05 12:51:43.577171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:01.056 pt1 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.056 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.056 "name": "raid_bdev1", 00:20:01.057 "uuid": "57580a36-8fc2-44cc-ae32-9825db1dd1b6", 00:20:01.057 "strip_size_kb": 0, 00:20:01.057 "state": "online", 00:20:01.057 "raid_level": "raid1", 00:20:01.057 "superblock": true, 00:20:01.057 "num_base_bdevs": 2, 00:20:01.057 "num_base_bdevs_discovered": 1, 00:20:01.057 "num_base_bdevs_operational": 1, 00:20:01.057 "base_bdevs_list": [ 00:20:01.057 { 00:20:01.057 "name": null, 00:20:01.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.057 "is_configured": false, 00:20:01.057 "data_offset": 2048, 00:20:01.057 "data_size": 63488 00:20:01.057 }, 00:20:01.057 { 00:20:01.057 "name": "pt2", 00:20:01.057 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:20:01.057 "is_configured": true, 00:20:01.057 "data_offset": 2048, 00:20:01.057 "data_size": 63488 00:20:01.057 } 00:20:01.057 ] 00:20:01.057 }' 00:20:01.057 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.057 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.623 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:01.623 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.623 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:01.623 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.623 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.623 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:01.623 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:01.623 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.623 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:01.623 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.623 [2024-12-05 12:51:43.954638] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:01.623 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.623 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 57580a36-8fc2-44cc-ae32-9825db1dd1b6 '!=' 57580a36-8fc2-44cc-ae32-9825db1dd1b6 ']' 00:20:01.623 12:51:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61686 00:20:01.623 12:51:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61686 ']' 00:20:01.623 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61686 00:20:01.623 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:20:01.623 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:01.623 12:51:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61686 00:20:01.623 12:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:01.623 12:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:01.623 killing process with pid 61686 00:20:01.623 12:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61686' 00:20:01.623 12:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61686 00:20:01.623 [2024-12-05 12:51:44.008266] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:01.623 [2024-12-05 12:51:44.008331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:01.623 12:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61686 00:20:01.623 [2024-12-05 12:51:44.008367] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:01.623 [2024-12-05 12:51:44.008380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:01.623 [2024-12-05 12:51:44.107850] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:02.192 12:51:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:02.192 00:20:02.192 real 0m4.296s 00:20:02.192 user 0m6.642s 00:20:02.192 sys 0m0.664s 00:20:02.192 ************************************ 
00:20:02.192 END TEST raid_superblock_test 00:20:02.192 ************************************ 00:20:02.192 12:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:02.192 12:51:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.192 12:51:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:20:02.192 12:51:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:02.192 12:51:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:02.192 12:51:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:02.192 ************************************ 00:20:02.192 START TEST raid_read_error_test 00:20:02.192 ************************************ 00:20:02.192 12:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:20:02.192 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:20:02.192 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:20:02.192 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:20:02.192 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:20:02.192 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:02.192 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:20:02.192 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:02.192 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:02.192 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:20:02.193 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:02.193 12:51:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:02.193 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:02.193 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:20:02.193 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:20:02.193 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:20:02.193 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:20:02.193 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:20:02.193 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:20:02.193 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:20:02.193 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:20:02.193 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:20:02.193 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.j1EYALOq0W 00:20:02.193 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61999 00:20:02.193 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61999 00:20:02.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:02.193 12:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61999 ']' 00:20:02.193 12:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.193 12:51:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:02.193 12:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:02.193 12:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.193 12:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:02.193 12:51:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.451 [2024-12-05 12:51:44.788306] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:20:02.452 [2024-12-05 12:51:44.788424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61999 ] 00:20:02.452 [2024-12-05 12:51:44.947992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.718 [2024-12-05 12:51:45.047971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.718 [2024-12-05 12:51:45.183941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:02.718 [2024-12-05 12:51:45.183997] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.291 BaseBdev1_malloc 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.291 true 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.291 [2024-12-05 12:51:45.741079] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:03.291 [2024-12-05 12:51:45.741136] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.291 [2024-12-05 12:51:45.741157] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:03.291 [2024-12-05 12:51:45.741168] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.291 [2024-12-05 12:51:45.743329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.291 [2024-12-05 12:51:45.743505] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:03.291 BaseBdev1 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.291 BaseBdev2_malloc 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.291 true 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.291 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.291 [2024-12-05 12:51:45.785159] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:03.291 [2024-12-05 12:51:45.785214] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.291 [2024-12-05 12:51:45.785232] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:03.291 [2024-12-05 12:51:45.785243] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.292 [2024-12-05 12:51:45.787450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.292 [2024-12-05 12:51:45.787617] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:03.292 BaseBdev2 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.292 [2024-12-05 12:51:45.793222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:03.292 
[2024-12-05 12:51:45.795152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:03.292 [2024-12-05 12:51:45.795510] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:03.292 [2024-12-05 12:51:45.795530] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:03.292 [2024-12-05 12:51:45.795787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:03.292 [2024-12-05 12:51:45.795985] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:03.292 [2024-12-05 12:51:45.795996] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:03.292 [2024-12-05 12:51:45.796140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.292 "name": "raid_bdev1", 00:20:03.292 "uuid": "d908d1cb-137d-4b8f-bd6e-25b0351fb405", 00:20:03.292 "strip_size_kb": 0, 00:20:03.292 "state": "online", 00:20:03.292 "raid_level": "raid1", 00:20:03.292 "superblock": true, 00:20:03.292 "num_base_bdevs": 2, 00:20:03.292 "num_base_bdevs_discovered": 2, 00:20:03.292 "num_base_bdevs_operational": 2, 00:20:03.292 "base_bdevs_list": [ 00:20:03.292 { 00:20:03.292 "name": "BaseBdev1", 00:20:03.292 "uuid": "67e112d5-8d0c-5197-8f74-559f413e32c1", 00:20:03.292 "is_configured": true, 00:20:03.292 "data_offset": 2048, 00:20:03.292 "data_size": 63488 00:20:03.292 }, 00:20:03.292 { 00:20:03.292 "name": "BaseBdev2", 00:20:03.292 "uuid": "d51b70f0-6c32-51cd-bbe5-e9efd41486ee", 00:20:03.292 "is_configured": true, 00:20:03.292 "data_offset": 2048, 00:20:03.292 "data_size": 63488 00:20:03.292 } 00:20:03.292 ] 00:20:03.292 }' 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.292 12:51:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.553 12:51:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:20:03.553 12:51:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:03.813 [2024-12-05 12:51:46.186235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:20:04.756 12:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:04.756 12:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.756 12:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.756 12:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.756 12:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:20:04.756 12:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:20:04.756 12:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:20:04.756 12:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:20:04.756 12:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:04.756 12:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:04.756 12:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.756 12:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:04.756 12:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:04.756 12:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:04.756 12:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.756 12:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:20:04.756 12:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.756 12:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.756 12:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.756 12:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.757 12:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.757 12:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.757 12:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.757 12:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.757 "name": "raid_bdev1", 00:20:04.757 "uuid": "d908d1cb-137d-4b8f-bd6e-25b0351fb405", 00:20:04.757 "strip_size_kb": 0, 00:20:04.757 "state": "online", 00:20:04.757 "raid_level": "raid1", 00:20:04.757 "superblock": true, 00:20:04.757 "num_base_bdevs": 2, 00:20:04.757 "num_base_bdevs_discovered": 2, 00:20:04.757 "num_base_bdevs_operational": 2, 00:20:04.757 "base_bdevs_list": [ 00:20:04.757 { 00:20:04.757 "name": "BaseBdev1", 00:20:04.757 "uuid": "67e112d5-8d0c-5197-8f74-559f413e32c1", 00:20:04.757 "is_configured": true, 00:20:04.757 "data_offset": 2048, 00:20:04.757 "data_size": 63488 00:20:04.757 }, 00:20:04.757 { 00:20:04.757 "name": "BaseBdev2", 00:20:04.757 "uuid": "d51b70f0-6c32-51cd-bbe5-e9efd41486ee", 00:20:04.757 "is_configured": true, 00:20:04.757 "data_offset": 2048, 00:20:04.757 "data_size": 63488 00:20:04.757 } 00:20:04.757 ] 00:20:04.757 }' 00:20:04.757 12:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.757 12:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.017 12:51:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:05.017 12:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.017 12:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.017 [2024-12-05 12:51:47.425740] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:05.017 [2024-12-05 12:51:47.425892] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:05.017 [2024-12-05 12:51:47.428991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:05.017 [2024-12-05 12:51:47.429120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:05.017 [2024-12-05 12:51:47.429229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:05.017 [2024-12-05 12:51:47.429395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:05.017 { 00:20:05.017 "results": [ 00:20:05.017 { 00:20:05.017 "job": "raid_bdev1", 00:20:05.017 "core_mask": "0x1", 00:20:05.017 "workload": "randrw", 00:20:05.017 "percentage": 50, 00:20:05.017 "status": "finished", 00:20:05.017 "queue_depth": 1, 00:20:05.017 "io_size": 131072, 00:20:05.017 "runtime": 1.23707, 00:20:05.017 "iops": 17553.574171227174, 00:20:05.017 "mibps": 2194.1967714033967, 00:20:05.017 "io_failed": 0, 00:20:05.017 "io_timeout": 0, 00:20:05.017 "avg_latency_us": 53.604656122141726, 00:20:05.017 "min_latency_us": 30.523076923076925, 00:20:05.017 "max_latency_us": 1676.2092307692308 00:20:05.017 } 00:20:05.017 ], 00:20:05.017 "core_count": 1 00:20:05.017 } 00:20:05.017 12:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.017 12:51:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61999 00:20:05.017 
12:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61999 ']' 00:20:05.017 12:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61999 00:20:05.017 12:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:20:05.017 12:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.017 12:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61999 00:20:05.017 killing process with pid 61999 00:20:05.017 12:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:05.017 12:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:05.017 12:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61999' 00:20:05.017 12:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61999 00:20:05.017 [2024-12-05 12:51:47.457211] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:05.017 12:51:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61999 00:20:05.017 [2024-12-05 12:51:47.540735] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:05.588 12:51:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.j1EYALOq0W 00:20:05.588 12:51:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:20:05.588 12:51:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:20:05.588 12:51:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:20:05.588 12:51:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:20:05.588 12:51:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:05.588 12:51:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:05.588 12:51:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:20:05.588 00:20:05.588 real 0m3.454s 00:20:05.588 user 0m4.195s 00:20:05.588 sys 0m0.379s 00:20:05.588 12:51:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:05.588 ************************************ 00:20:05.588 END TEST raid_read_error_test 00:20:05.588 ************************************ 00:20:05.588 12:51:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.849 12:51:48 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:20:05.849 12:51:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:05.849 12:51:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:05.849 12:51:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:05.849 ************************************ 00:20:05.849 START TEST raid_write_error_test 00:20:05.849 ************************************ 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:05.849 
12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ctxpl5jKGY 00:20:05.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62134 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62134 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62134 ']' 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:05.849 12:51:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.849 [2024-12-05 12:51:48.278947] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:20:05.849 [2024-12-05 12:51:48.279062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62134 ] 00:20:06.110 [2024-12-05 12:51:48.433962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.110 [2024-12-05 12:51:48.514179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.110 [2024-12-05 12:51:48.624176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:06.110 [2024-12-05 12:51:48.624220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.681 BaseBdev1_malloc 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.681 true 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.681 [2024-12-05 12:51:49.150545] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:06.681 [2024-12-05 12:51:49.150592] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.681 [2024-12-05 12:51:49.150607] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:06.681 [2024-12-05 12:51:49.150616] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.681 [2024-12-05 12:51:49.152345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.681 [2024-12-05 12:51:49.152480] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:06.681 BaseBdev1 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.681 BaseBdev2_malloc 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:20:06.681 12:51:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.681 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.681 true 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.682 [2024-12-05 12:51:49.189842] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:06.682 [2024-12-05 12:51:49.189879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.682 [2024-12-05 12:51:49.189890] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:06.682 [2024-12-05 12:51:49.189898] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.682 [2024-12-05 12:51:49.191602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.682 [2024-12-05 12:51:49.191629] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:06.682 BaseBdev2 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.682 [2024-12-05 12:51:49.197888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:20:06.682 [2024-12-05 12:51:49.199375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:06.682 [2024-12-05 12:51:49.199634] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:06.682 [2024-12-05 12:51:49.199650] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:06.682 [2024-12-05 12:51:49.199846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:06.682 [2024-12-05 12:51:49.199967] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:06.682 [2024-12-05 12:51:49.199974] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:06.682 [2024-12-05 12:51:49.200081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.682 "name": "raid_bdev1", 00:20:06.682 "uuid": "831c0bad-4708-4e0d-aea9-0e3d2e37b11c", 00:20:06.682 "strip_size_kb": 0, 00:20:06.682 "state": "online", 00:20:06.682 "raid_level": "raid1", 00:20:06.682 "superblock": true, 00:20:06.682 "num_base_bdevs": 2, 00:20:06.682 "num_base_bdevs_discovered": 2, 00:20:06.682 "num_base_bdevs_operational": 2, 00:20:06.682 "base_bdevs_list": [ 00:20:06.682 { 00:20:06.682 "name": "BaseBdev1", 00:20:06.682 "uuid": "106c37de-5622-523c-ac89-47d079206db4", 00:20:06.682 "is_configured": true, 00:20:06.682 "data_offset": 2048, 00:20:06.682 "data_size": 63488 00:20:06.682 }, 00:20:06.682 { 00:20:06.682 "name": "BaseBdev2", 00:20:06.682 "uuid": "0804a257-a211-5bf0-894f-10f78e206fd8", 00:20:06.682 "is_configured": true, 00:20:06.682 "data_offset": 2048, 00:20:06.682 "data_size": 63488 00:20:06.682 } 00:20:06.682 ] 00:20:06.682 }' 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.682 12:51:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.943 12:51:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:20:06.943 12:51:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:07.203 [2024-12-05 12:51:49.590728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:20:08.188 12:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:08.188 12:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.188 12:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.188 [2024-12-05 12:51:50.511479] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:20:08.189 [2024-12-05 12:51:50.511553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:08.189 [2024-12-05 12:51:50.511731] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.189 "name": "raid_bdev1", 00:20:08.189 "uuid": "831c0bad-4708-4e0d-aea9-0e3d2e37b11c", 00:20:08.189 "strip_size_kb": 0, 00:20:08.189 "state": "online", 00:20:08.189 "raid_level": "raid1", 00:20:08.189 "superblock": true, 00:20:08.189 "num_base_bdevs": 2, 00:20:08.189 "num_base_bdevs_discovered": 1, 00:20:08.189 "num_base_bdevs_operational": 1, 00:20:08.189 "base_bdevs_list": [ 00:20:08.189 { 00:20:08.189 "name": null, 00:20:08.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.189 "is_configured": false, 00:20:08.189 "data_offset": 0, 00:20:08.189 "data_size": 63488 00:20:08.189 }, 00:20:08.189 { 00:20:08.189 "name": 
"BaseBdev2", 00:20:08.189 "uuid": "0804a257-a211-5bf0-894f-10f78e206fd8", 00:20:08.189 "is_configured": true, 00:20:08.189 "data_offset": 2048, 00:20:08.189 "data_size": 63488 00:20:08.189 } 00:20:08.189 ] 00:20:08.189 }' 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.189 12:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.636 12:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:08.636 12:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.636 12:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.636 [2024-12-05 12:51:50.828632] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:08.636 [2024-12-05 12:51:50.828745] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:08.636 [2024-12-05 12:51:50.831204] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:08.636 [2024-12-05 12:51:50.831303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:08.636 [2024-12-05 12:51:50.831411] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:08.636 [2024-12-05 12:51:50.831566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:08.636 12:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.636 { 00:20:08.636 "results": [ 00:20:08.636 { 00:20:08.636 "job": "raid_bdev1", 00:20:08.636 "core_mask": "0x1", 00:20:08.636 "workload": "randrw", 00:20:08.636 "percentage": 50, 00:20:08.636 "status": "finished", 00:20:08.636 "queue_depth": 1, 00:20:08.636 "io_size": 131072, 00:20:08.636 "runtime": 1.236591, 00:20:08.636 "iops": 24262.670519193493, 
00:20:08.636 "mibps": 3032.8338148991866, 00:20:08.636 "io_failed": 0, 00:20:08.636 "io_timeout": 0, 00:20:08.636 "avg_latency_us": 38.61881709264971, 00:20:08.636 "min_latency_us": 23.138461538461538, 00:20:08.636 "max_latency_us": 1348.5292307692307 00:20:08.636 } 00:20:08.636 ], 00:20:08.636 "core_count": 1 00:20:08.636 } 00:20:08.636 12:51:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62134 00:20:08.636 12:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62134 ']' 00:20:08.636 12:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62134 00:20:08.636 12:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:20:08.636 12:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:08.636 12:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62134 00:20:08.636 killing process with pid 62134 00:20:08.636 12:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:08.636 12:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:08.636 12:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62134' 00:20:08.636 12:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62134 00:20:08.636 12:51:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62134 00:20:08.636 [2024-12-05 12:51:50.863113] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:08.636 [2024-12-05 12:51:50.931343] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:09.232 12:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:20:09.232 12:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job 
/raidtest/tmp.ctxpl5jKGY 00:20:09.232 12:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:20:09.232 12:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:20:09.232 12:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:20:09.232 12:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:09.232 12:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:09.232 12:51:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:20:09.232 00:20:09.232 real 0m3.352s 00:20:09.232 user 0m4.053s 00:20:09.232 sys 0m0.364s 00:20:09.232 12:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:09.232 ************************************ 00:20:09.232 END TEST raid_write_error_test 00:20:09.232 ************************************ 00:20:09.232 12:51:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.232 12:51:51 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:20:09.232 12:51:51 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:20:09.232 12:51:51 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:20:09.232 12:51:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:09.232 12:51:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:09.232 12:51:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:09.232 ************************************ 00:20:09.232 START TEST raid_state_function_test 00:20:09.232 ************************************ 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
local raid_level=raid0 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:09.232 12:51:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:09.232 Process raid pid: 62261 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62261 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62261' 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62261 00:20:09.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62261 ']' 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:09.232 12:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.232 [2024-12-05 12:51:51.661928] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:20:09.232 [2024-12-05 12:51:51.662026] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.232 [2024-12-05 12:51:51.812625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.494 [2024-12-05 12:51:51.899199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.494 [2024-12-05 12:51:52.012088] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:09.494 [2024-12-05 12:51:52.012119] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.064 [2024-12-05 12:51:52.597779] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:10.064 [2024-12-05 
12:51:52.597833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:10.064 [2024-12-05 12:51:52.597846] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:10.064 [2024-12-05 12:51:52.597854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:10.064 [2024-12-05 12:51:52.597859] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:10.064 [2024-12-05 12:51:52.597866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.064 "name": "Existed_Raid", 00:20:10.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.064 "strip_size_kb": 64, 00:20:10.064 "state": "configuring", 00:20:10.064 "raid_level": "raid0", 00:20:10.064 "superblock": false, 00:20:10.064 "num_base_bdevs": 3, 00:20:10.064 "num_base_bdevs_discovered": 0, 00:20:10.064 "num_base_bdevs_operational": 3, 00:20:10.064 "base_bdevs_list": [ 00:20:10.064 { 00:20:10.064 "name": "BaseBdev1", 00:20:10.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.064 "is_configured": false, 00:20:10.064 "data_offset": 0, 00:20:10.064 "data_size": 0 00:20:10.064 }, 00:20:10.064 { 00:20:10.064 "name": "BaseBdev2", 00:20:10.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.064 "is_configured": false, 00:20:10.064 "data_offset": 0, 00:20:10.064 "data_size": 0 00:20:10.064 }, 00:20:10.064 { 00:20:10.064 "name": "BaseBdev3", 00:20:10.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.064 "is_configured": false, 00:20:10.064 "data_offset": 0, 00:20:10.064 "data_size": 0 00:20:10.064 } 00:20:10.064 ] 00:20:10.064 }' 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.064 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.330 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:20:10.330 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.330 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.330 [2024-12-05 12:51:52.909799] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:10.330 [2024-12-05 12:51:52.909829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.591 [2024-12-05 12:51:52.917799] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:10.591 [2024-12-05 12:51:52.917836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:10.591 [2024-12-05 12:51:52.917843] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:10.591 [2024-12-05 12:51:52.917852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:10.591 [2024-12-05 12:51:52.917857] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:10.591 [2024-12-05 12:51:52.917865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.591 [2024-12-05 12:51:52.945607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:10.591 BaseBdev1 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.591 [ 00:20:10.591 { 
00:20:10.591 "name": "BaseBdev1", 00:20:10.591 "aliases": [ 00:20:10.591 "ec57fe59-13b8-46b1-a3ef-54777cee6506" 00:20:10.591 ], 00:20:10.591 "product_name": "Malloc disk", 00:20:10.591 "block_size": 512, 00:20:10.591 "num_blocks": 65536, 00:20:10.591 "uuid": "ec57fe59-13b8-46b1-a3ef-54777cee6506", 00:20:10.591 "assigned_rate_limits": { 00:20:10.591 "rw_ios_per_sec": 0, 00:20:10.591 "rw_mbytes_per_sec": 0, 00:20:10.591 "r_mbytes_per_sec": 0, 00:20:10.591 "w_mbytes_per_sec": 0 00:20:10.591 }, 00:20:10.591 "claimed": true, 00:20:10.591 "claim_type": "exclusive_write", 00:20:10.591 "zoned": false, 00:20:10.591 "supported_io_types": { 00:20:10.591 "read": true, 00:20:10.591 "write": true, 00:20:10.591 "unmap": true, 00:20:10.591 "flush": true, 00:20:10.591 "reset": true, 00:20:10.591 "nvme_admin": false, 00:20:10.591 "nvme_io": false, 00:20:10.591 "nvme_io_md": false, 00:20:10.591 "write_zeroes": true, 00:20:10.591 "zcopy": true, 00:20:10.591 "get_zone_info": false, 00:20:10.591 "zone_management": false, 00:20:10.591 "zone_append": false, 00:20:10.591 "compare": false, 00:20:10.591 "compare_and_write": false, 00:20:10.591 "abort": true, 00:20:10.591 "seek_hole": false, 00:20:10.591 "seek_data": false, 00:20:10.591 "copy": true, 00:20:10.591 "nvme_iov_md": false 00:20:10.591 }, 00:20:10.591 "memory_domains": [ 00:20:10.591 { 00:20:10.591 "dma_device_id": "system", 00:20:10.591 "dma_device_type": 1 00:20:10.591 }, 00:20:10.591 { 00:20:10.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.591 "dma_device_type": 2 00:20:10.591 } 00:20:10.591 ], 00:20:10.591 "driver_specific": {} 00:20:10.591 } 00:20:10.591 ] 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:10.591 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 
00:20:10.592 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:10.592 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:10.592 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:10.592 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:10.592 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:10.592 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.592 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.592 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.592 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.592 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.592 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.592 12:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.592 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.592 12:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.592 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.592 "name": "Existed_Raid", 00:20:10.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.592 "strip_size_kb": 64, 00:20:10.592 "state": "configuring", 00:20:10.592 "raid_level": "raid0", 00:20:10.592 "superblock": false, 00:20:10.592 "num_base_bdevs": 3, 00:20:10.592 
"num_base_bdevs_discovered": 1, 00:20:10.592 "num_base_bdevs_operational": 3, 00:20:10.592 "base_bdevs_list": [ 00:20:10.592 { 00:20:10.592 "name": "BaseBdev1", 00:20:10.592 "uuid": "ec57fe59-13b8-46b1-a3ef-54777cee6506", 00:20:10.592 "is_configured": true, 00:20:10.592 "data_offset": 0, 00:20:10.592 "data_size": 65536 00:20:10.592 }, 00:20:10.592 { 00:20:10.592 "name": "BaseBdev2", 00:20:10.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.592 "is_configured": false, 00:20:10.592 "data_offset": 0, 00:20:10.592 "data_size": 0 00:20:10.592 }, 00:20:10.592 { 00:20:10.592 "name": "BaseBdev3", 00:20:10.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.592 "is_configured": false, 00:20:10.592 "data_offset": 0, 00:20:10.592 "data_size": 0 00:20:10.592 } 00:20:10.592 ] 00:20:10.592 }' 00:20:10.592 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.592 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.852 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:10.852 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.852 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.852 [2024-12-05 12:51:53.297715] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:10.852 [2024-12-05 12:51:53.297754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:10.852 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.852 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:10.852 12:51:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.852 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.852 [2024-12-05 12:51:53.305756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:10.852 [2024-12-05 12:51:53.307310] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:10.852 [2024-12-05 12:51:53.307344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:10.852 [2024-12-05 12:51:53.307352] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:10.852 [2024-12-05 12:51:53.307359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:10.853 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.853 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:10.853 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:10.853 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:10.853 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:10.853 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:10.853 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:10.853 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:10.853 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:10.853 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.853 12:51:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.853 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.853 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.853 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.853 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.853 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.853 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.853 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.853 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.853 "name": "Existed_Raid", 00:20:10.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.853 "strip_size_kb": 64, 00:20:10.853 "state": "configuring", 00:20:10.853 "raid_level": "raid0", 00:20:10.853 "superblock": false, 00:20:10.853 "num_base_bdevs": 3, 00:20:10.853 "num_base_bdevs_discovered": 1, 00:20:10.853 "num_base_bdevs_operational": 3, 00:20:10.853 "base_bdevs_list": [ 00:20:10.853 { 00:20:10.853 "name": "BaseBdev1", 00:20:10.853 "uuid": "ec57fe59-13b8-46b1-a3ef-54777cee6506", 00:20:10.853 "is_configured": true, 00:20:10.853 "data_offset": 0, 00:20:10.853 "data_size": 65536 00:20:10.853 }, 00:20:10.853 { 00:20:10.853 "name": "BaseBdev2", 00:20:10.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.853 "is_configured": false, 00:20:10.853 "data_offset": 0, 00:20:10.853 "data_size": 0 00:20:10.853 }, 00:20:10.853 { 00:20:10.853 "name": "BaseBdev3", 00:20:10.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.853 "is_configured": false, 00:20:10.853 "data_offset": 
0, 00:20:10.853 "data_size": 0 00:20:10.853 } 00:20:10.853 ] 00:20:10.853 }' 00:20:10.853 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.853 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.113 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:11.113 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.113 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.113 [2024-12-05 12:51:53.624702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:11.113 BaseBdev2 00:20:11.113 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.113 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:11.113 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:11.113 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:11.113 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:11.113 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:11.113 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:11.113 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:11.113 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.113 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.113 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:11.113 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:11.113 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.113 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.113 [ 00:20:11.113 { 00:20:11.113 "name": "BaseBdev2", 00:20:11.113 "aliases": [ 00:20:11.113 "e632c3bb-6a89-402c-8347-b499da8fb207" 00:20:11.113 ], 00:20:11.113 "product_name": "Malloc disk", 00:20:11.113 "block_size": 512, 00:20:11.113 "num_blocks": 65536, 00:20:11.114 "uuid": "e632c3bb-6a89-402c-8347-b499da8fb207", 00:20:11.114 "assigned_rate_limits": { 00:20:11.114 "rw_ios_per_sec": 0, 00:20:11.114 "rw_mbytes_per_sec": 0, 00:20:11.114 "r_mbytes_per_sec": 0, 00:20:11.114 "w_mbytes_per_sec": 0 00:20:11.114 }, 00:20:11.114 "claimed": true, 00:20:11.114 "claim_type": "exclusive_write", 00:20:11.114 "zoned": false, 00:20:11.114 "supported_io_types": { 00:20:11.114 "read": true, 00:20:11.114 "write": true, 00:20:11.114 "unmap": true, 00:20:11.114 "flush": true, 00:20:11.114 "reset": true, 00:20:11.114 "nvme_admin": false, 00:20:11.114 "nvme_io": false, 00:20:11.114 "nvme_io_md": false, 00:20:11.114 "write_zeroes": true, 00:20:11.114 "zcopy": true, 00:20:11.114 "get_zone_info": false, 00:20:11.114 "zone_management": false, 00:20:11.114 "zone_append": false, 00:20:11.114 "compare": false, 00:20:11.114 "compare_and_write": false, 00:20:11.114 "abort": true, 00:20:11.114 "seek_hole": false, 00:20:11.114 "seek_data": false, 00:20:11.114 "copy": true, 00:20:11.114 "nvme_iov_md": false 00:20:11.114 }, 00:20:11.114 "memory_domains": [ 00:20:11.114 { 00:20:11.114 "dma_device_id": "system", 00:20:11.114 "dma_device_type": 1 00:20:11.114 }, 00:20:11.114 { 00:20:11.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.114 "dma_device_type": 2 00:20:11.114 } 00:20:11.114 ], 00:20:11.114 "driver_specific": {} 00:20:11.114 } 
00:20:11.114 ] 00:20:11.114 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.114 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:11.114 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:11.114 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:11.114 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:11.114 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:11.114 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:11.114 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:11.114 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.114 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:11.114 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.114 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.114 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.114 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.114 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.114 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.114 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.114 12:51:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.114 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.114 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.114 "name": "Existed_Raid", 00:20:11.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.114 "strip_size_kb": 64, 00:20:11.114 "state": "configuring", 00:20:11.114 "raid_level": "raid0", 00:20:11.114 "superblock": false, 00:20:11.114 "num_base_bdevs": 3, 00:20:11.114 "num_base_bdevs_discovered": 2, 00:20:11.114 "num_base_bdevs_operational": 3, 00:20:11.114 "base_bdevs_list": [ 00:20:11.114 { 00:20:11.114 "name": "BaseBdev1", 00:20:11.114 "uuid": "ec57fe59-13b8-46b1-a3ef-54777cee6506", 00:20:11.114 "is_configured": true, 00:20:11.114 "data_offset": 0, 00:20:11.114 "data_size": 65536 00:20:11.114 }, 00:20:11.114 { 00:20:11.114 "name": "BaseBdev2", 00:20:11.114 "uuid": "e632c3bb-6a89-402c-8347-b499da8fb207", 00:20:11.114 "is_configured": true, 00:20:11.114 "data_offset": 0, 00:20:11.114 "data_size": 65536 00:20:11.114 }, 00:20:11.114 { 00:20:11.114 "name": "BaseBdev3", 00:20:11.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.114 "is_configured": false, 00:20:11.114 "data_offset": 0, 00:20:11.114 "data_size": 0 00:20:11.114 } 00:20:11.114 ] 00:20:11.114 }' 00:20:11.114 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.114 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.375 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:11.375 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.375 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.635 [2024-12-05 12:51:53.981929] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:11.636 [2024-12-05 12:51:53.981969] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:11.636 [2024-12-05 12:51:53.981981] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:11.636 [2024-12-05 12:51:53.982198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:11.636 [2024-12-05 12:51:53.982320] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:11.636 [2024-12-05 12:51:53.982327] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:11.636 [2024-12-05 12:51:53.982555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.636 BaseBdev3 00:20:11.636 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.636 12:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:11.636 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:11.636 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:11.636 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:11.636 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:11.636 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:11.636 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:11.636 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.636 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:20:11.636 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.636 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:11.636 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.636 12:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.636 [ 00:20:11.636 { 00:20:11.636 "name": "BaseBdev3", 00:20:11.636 "aliases": [ 00:20:11.636 "b806b6ac-ab6e-4bc9-bd3f-b3a17ebf62a4" 00:20:11.636 ], 00:20:11.636 "product_name": "Malloc disk", 00:20:11.636 "block_size": 512, 00:20:11.636 "num_blocks": 65536, 00:20:11.636 "uuid": "b806b6ac-ab6e-4bc9-bd3f-b3a17ebf62a4", 00:20:11.636 "assigned_rate_limits": { 00:20:11.636 "rw_ios_per_sec": 0, 00:20:11.636 "rw_mbytes_per_sec": 0, 00:20:11.636 "r_mbytes_per_sec": 0, 00:20:11.636 "w_mbytes_per_sec": 0 00:20:11.636 }, 00:20:11.636 "claimed": true, 00:20:11.636 "claim_type": "exclusive_write", 00:20:11.636 "zoned": false, 00:20:11.636 "supported_io_types": { 00:20:11.636 "read": true, 00:20:11.636 "write": true, 00:20:11.636 "unmap": true, 00:20:11.636 "flush": true, 00:20:11.636 "reset": true, 00:20:11.636 "nvme_admin": false, 00:20:11.636 "nvme_io": false, 00:20:11.636 "nvme_io_md": false, 00:20:11.636 "write_zeroes": true, 00:20:11.636 "zcopy": true, 00:20:11.636 "get_zone_info": false, 00:20:11.636 "zone_management": false, 00:20:11.636 "zone_append": false, 00:20:11.636 "compare": false, 00:20:11.636 "compare_and_write": false, 00:20:11.636 "abort": true, 00:20:11.636 "seek_hole": false, 00:20:11.636 "seek_data": false, 00:20:11.636 "copy": true, 00:20:11.636 "nvme_iov_md": false 00:20:11.636 }, 00:20:11.636 "memory_domains": [ 00:20:11.636 { 00:20:11.636 "dma_device_id": "system", 00:20:11.636 "dma_device_type": 1 00:20:11.636 }, 00:20:11.636 { 00:20:11.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:20:11.636 "dma_device_type": 2 00:20:11.636 } 00:20:11.636 ], 00:20:11.636 "driver_specific": {} 00:20:11.636 } 00:20:11.636 ] 00:20:11.636 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.636 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:11.636 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:11.636 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:11.636 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:20:11.636 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:11.636 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.636 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:11.636 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.636 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:11.636 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.636 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.636 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.636 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.636 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.636 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.636 12:51:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.636 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.636 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.636 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.636 "name": "Existed_Raid", 00:20:11.636 "uuid": "2d6f20be-5b7b-4d1f-8c6e-8ad517156949", 00:20:11.636 "strip_size_kb": 64, 00:20:11.636 "state": "online", 00:20:11.636 "raid_level": "raid0", 00:20:11.636 "superblock": false, 00:20:11.636 "num_base_bdevs": 3, 00:20:11.636 "num_base_bdevs_discovered": 3, 00:20:11.636 "num_base_bdevs_operational": 3, 00:20:11.636 "base_bdevs_list": [ 00:20:11.636 { 00:20:11.636 "name": "BaseBdev1", 00:20:11.636 "uuid": "ec57fe59-13b8-46b1-a3ef-54777cee6506", 00:20:11.636 "is_configured": true, 00:20:11.636 "data_offset": 0, 00:20:11.636 "data_size": 65536 00:20:11.636 }, 00:20:11.636 { 00:20:11.636 "name": "BaseBdev2", 00:20:11.636 "uuid": "e632c3bb-6a89-402c-8347-b499da8fb207", 00:20:11.636 "is_configured": true, 00:20:11.636 "data_offset": 0, 00:20:11.636 "data_size": 65536 00:20:11.636 }, 00:20:11.636 { 00:20:11.636 "name": "BaseBdev3", 00:20:11.636 "uuid": "b806b6ac-ab6e-4bc9-bd3f-b3a17ebf62a4", 00:20:11.636 "is_configured": true, 00:20:11.636 "data_offset": 0, 00:20:11.636 "data_size": 65536 00:20:11.636 } 00:20:11.636 ] 00:20:11.636 }' 00:20:11.636 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.636 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.897 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:11.897 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:11.897 12:51:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:11.897 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:11.897 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:11.897 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:11.897 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:11.897 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.897 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:11.897 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.897 [2024-12-05 12:51:54.306314] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:11.897 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.897 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:11.897 "name": "Existed_Raid", 00:20:11.897 "aliases": [ 00:20:11.897 "2d6f20be-5b7b-4d1f-8c6e-8ad517156949" 00:20:11.897 ], 00:20:11.897 "product_name": "Raid Volume", 00:20:11.897 "block_size": 512, 00:20:11.897 "num_blocks": 196608, 00:20:11.897 "uuid": "2d6f20be-5b7b-4d1f-8c6e-8ad517156949", 00:20:11.897 "assigned_rate_limits": { 00:20:11.897 "rw_ios_per_sec": 0, 00:20:11.897 "rw_mbytes_per_sec": 0, 00:20:11.897 "r_mbytes_per_sec": 0, 00:20:11.897 "w_mbytes_per_sec": 0 00:20:11.897 }, 00:20:11.897 "claimed": false, 00:20:11.897 "zoned": false, 00:20:11.897 "supported_io_types": { 00:20:11.897 "read": true, 00:20:11.897 "write": true, 00:20:11.897 "unmap": true, 00:20:11.897 "flush": true, 00:20:11.897 "reset": true, 00:20:11.897 "nvme_admin": false, 00:20:11.897 "nvme_io": false, 00:20:11.897 
"nvme_io_md": false, 00:20:11.897 "write_zeroes": true, 00:20:11.897 "zcopy": false, 00:20:11.897 "get_zone_info": false, 00:20:11.897 "zone_management": false, 00:20:11.897 "zone_append": false, 00:20:11.897 "compare": false, 00:20:11.897 "compare_and_write": false, 00:20:11.897 "abort": false, 00:20:11.898 "seek_hole": false, 00:20:11.898 "seek_data": false, 00:20:11.898 "copy": false, 00:20:11.898 "nvme_iov_md": false 00:20:11.898 }, 00:20:11.898 "memory_domains": [ 00:20:11.898 { 00:20:11.898 "dma_device_id": "system", 00:20:11.898 "dma_device_type": 1 00:20:11.898 }, 00:20:11.898 { 00:20:11.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.898 "dma_device_type": 2 00:20:11.898 }, 00:20:11.898 { 00:20:11.898 "dma_device_id": "system", 00:20:11.898 "dma_device_type": 1 00:20:11.898 }, 00:20:11.898 { 00:20:11.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.898 "dma_device_type": 2 00:20:11.898 }, 00:20:11.898 { 00:20:11.898 "dma_device_id": "system", 00:20:11.898 "dma_device_type": 1 00:20:11.898 }, 00:20:11.898 { 00:20:11.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.898 "dma_device_type": 2 00:20:11.898 } 00:20:11.898 ], 00:20:11.898 "driver_specific": { 00:20:11.898 "raid": { 00:20:11.898 "uuid": "2d6f20be-5b7b-4d1f-8c6e-8ad517156949", 00:20:11.898 "strip_size_kb": 64, 00:20:11.898 "state": "online", 00:20:11.898 "raid_level": "raid0", 00:20:11.898 "superblock": false, 00:20:11.898 "num_base_bdevs": 3, 00:20:11.898 "num_base_bdevs_discovered": 3, 00:20:11.898 "num_base_bdevs_operational": 3, 00:20:11.898 "base_bdevs_list": [ 00:20:11.898 { 00:20:11.898 "name": "BaseBdev1", 00:20:11.898 "uuid": "ec57fe59-13b8-46b1-a3ef-54777cee6506", 00:20:11.898 "is_configured": true, 00:20:11.898 "data_offset": 0, 00:20:11.898 "data_size": 65536 00:20:11.898 }, 00:20:11.898 { 00:20:11.898 "name": "BaseBdev2", 00:20:11.898 "uuid": "e632c3bb-6a89-402c-8347-b499da8fb207", 00:20:11.898 "is_configured": true, 00:20:11.898 "data_offset": 0, 00:20:11.898 
"data_size": 65536 00:20:11.898 }, 00:20:11.898 { 00:20:11.898 "name": "BaseBdev3", 00:20:11.898 "uuid": "b806b6ac-ab6e-4bc9-bd3f-b3a17ebf62a4", 00:20:11.898 "is_configured": true, 00:20:11.898 "data_offset": 0, 00:20:11.898 "data_size": 65536 00:20:11.898 } 00:20:11.898 ] 00:20:11.898 } 00:20:11.898 } 00:20:11.898 }' 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:11.898 BaseBdev2 00:20:11.898 BaseBdev3' 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:11.898 12:51:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.898 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.159 
12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.159 [2024-12-05 12:51:54.498127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:12.159 [2024-12-05 12:51:54.498152] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:12.159 [2024-12-05 12:51:54.498194] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.159 "name": "Existed_Raid", 00:20:12.159 "uuid": "2d6f20be-5b7b-4d1f-8c6e-8ad517156949", 00:20:12.159 "strip_size_kb": 64, 00:20:12.159 "state": "offline", 00:20:12.159 "raid_level": "raid0", 00:20:12.159 "superblock": false, 00:20:12.159 "num_base_bdevs": 3, 00:20:12.159 "num_base_bdevs_discovered": 2, 00:20:12.159 "num_base_bdevs_operational": 2, 00:20:12.159 "base_bdevs_list": [ 00:20:12.159 { 00:20:12.159 "name": null, 00:20:12.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.159 "is_configured": false, 00:20:12.159 "data_offset": 0, 00:20:12.159 "data_size": 65536 00:20:12.159 }, 00:20:12.159 { 00:20:12.159 "name": "BaseBdev2", 00:20:12.159 "uuid": "e632c3bb-6a89-402c-8347-b499da8fb207", 00:20:12.159 "is_configured": true, 00:20:12.159 "data_offset": 0, 00:20:12.159 "data_size": 65536 00:20:12.159 }, 00:20:12.159 { 00:20:12.159 "name": "BaseBdev3", 00:20:12.159 "uuid": "b806b6ac-ab6e-4bc9-bd3f-b3a17ebf62a4", 00:20:12.159 "is_configured": true, 00:20:12.159 "data_offset": 0, 00:20:12.159 "data_size": 65536 00:20:12.159 } 00:20:12.159 ] 00:20:12.159 }' 
00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.159 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.421 [2024-12-05 12:51:54.897544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:12.421 12:51:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.421 12:51:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.421 [2024-12-05 12:51:54.992825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:12.421 [2024-12-05 12:51:54.992868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.683 BaseBdev2 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.683 [ 00:20:12.683 { 00:20:12.683 "name": "BaseBdev2", 00:20:12.683 "aliases": [ 00:20:12.683 "58652880-1814-47a5-b001-f1386340b2c9" 00:20:12.683 ], 00:20:12.683 "product_name": "Malloc disk", 00:20:12.683 "block_size": 512, 00:20:12.683 "num_blocks": 65536, 00:20:12.683 "uuid": "58652880-1814-47a5-b001-f1386340b2c9", 00:20:12.683 "assigned_rate_limits": { 00:20:12.683 "rw_ios_per_sec": 0, 00:20:12.683 "rw_mbytes_per_sec": 0, 00:20:12.683 "r_mbytes_per_sec": 0, 00:20:12.683 "w_mbytes_per_sec": 0 00:20:12.683 }, 00:20:12.683 "claimed": false, 00:20:12.683 "zoned": false, 00:20:12.683 "supported_io_types": { 00:20:12.683 "read": true, 00:20:12.683 "write": true, 00:20:12.683 "unmap": true, 00:20:12.683 "flush": true, 00:20:12.683 "reset": true, 00:20:12.683 "nvme_admin": false, 00:20:12.683 "nvme_io": false, 00:20:12.683 "nvme_io_md": false, 00:20:12.683 "write_zeroes": true, 00:20:12.683 "zcopy": true, 00:20:12.683 "get_zone_info": false, 00:20:12.683 "zone_management": false, 00:20:12.683 "zone_append": false, 00:20:12.683 "compare": false, 00:20:12.683 "compare_and_write": false, 00:20:12.683 "abort": true, 00:20:12.683 "seek_hole": false, 00:20:12.683 "seek_data": false, 00:20:12.683 "copy": true, 00:20:12.683 "nvme_iov_md": false 
00:20:12.683 }, 00:20:12.683 "memory_domains": [ 00:20:12.683 { 00:20:12.683 "dma_device_id": "system", 00:20:12.683 "dma_device_type": 1 00:20:12.683 }, 00:20:12.683 { 00:20:12.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.683 "dma_device_type": 2 00:20:12.683 } 00:20:12.683 ], 00:20:12.683 "driver_specific": {} 00:20:12.683 } 00:20:12.683 ] 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.683 BaseBdev3 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.683 [ 00:20:12.683 { 00:20:12.683 "name": "BaseBdev3", 00:20:12.683 "aliases": [ 00:20:12.683 "5f171989-f01e-47ab-9e8e-c05db7653cf6" 00:20:12.683 ], 00:20:12.683 "product_name": "Malloc disk", 00:20:12.683 "block_size": 512, 00:20:12.683 "num_blocks": 65536, 00:20:12.683 "uuid": "5f171989-f01e-47ab-9e8e-c05db7653cf6", 00:20:12.683 "assigned_rate_limits": { 00:20:12.683 "rw_ios_per_sec": 0, 00:20:12.683 "rw_mbytes_per_sec": 0, 00:20:12.683 "r_mbytes_per_sec": 0, 00:20:12.683 "w_mbytes_per_sec": 0 00:20:12.683 }, 00:20:12.683 "claimed": false, 00:20:12.683 "zoned": false, 00:20:12.683 "supported_io_types": { 00:20:12.683 "read": true, 00:20:12.683 "write": true, 00:20:12.683 "unmap": true, 00:20:12.683 "flush": true, 00:20:12.683 "reset": true, 00:20:12.683 "nvme_admin": false, 00:20:12.683 "nvme_io": false, 00:20:12.683 "nvme_io_md": false, 00:20:12.683 "write_zeroes": true, 00:20:12.683 "zcopy": true, 00:20:12.683 "get_zone_info": false, 00:20:12.683 "zone_management": false, 00:20:12.683 "zone_append": false, 00:20:12.683 "compare": false, 00:20:12.683 "compare_and_write": false, 00:20:12.683 "abort": true, 00:20:12.683 "seek_hole": false, 00:20:12.683 "seek_data": false, 00:20:12.683 "copy": true, 00:20:12.683 "nvme_iov_md": false 
00:20:12.683 }, 00:20:12.683 "memory_domains": [ 00:20:12.683 { 00:20:12.683 "dma_device_id": "system", 00:20:12.683 "dma_device_type": 1 00:20:12.683 }, 00:20:12.683 { 00:20:12.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.683 "dma_device_type": 2 00:20:12.683 } 00:20:12.683 ], 00:20:12.683 "driver_specific": {} 00:20:12.683 } 00:20:12.683 ] 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.683 [2024-12-05 12:51:55.171987] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:12.683 [2024-12-05 12:51:55.172033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:12.683 [2024-12-05 12:51:55.172054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:12.683 [2024-12-05 12:51:55.173635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.683 "name": "Existed_Raid", 00:20:12.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.683 "strip_size_kb": 64, 00:20:12.683 "state": "configuring", 00:20:12.683 "raid_level": "raid0", 00:20:12.683 "superblock": false, 00:20:12.683 "num_base_bdevs": 3, 00:20:12.683 "num_base_bdevs_discovered": 2, 00:20:12.683 "num_base_bdevs_operational": 3, 
00:20:12.683 "base_bdevs_list": [ 00:20:12.683 { 00:20:12.683 "name": "BaseBdev1", 00:20:12.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.683 "is_configured": false, 00:20:12.683 "data_offset": 0, 00:20:12.683 "data_size": 0 00:20:12.683 }, 00:20:12.683 { 00:20:12.683 "name": "BaseBdev2", 00:20:12.683 "uuid": "58652880-1814-47a5-b001-f1386340b2c9", 00:20:12.683 "is_configured": true, 00:20:12.683 "data_offset": 0, 00:20:12.683 "data_size": 65536 00:20:12.683 }, 00:20:12.683 { 00:20:12.683 "name": "BaseBdev3", 00:20:12.683 "uuid": "5f171989-f01e-47ab-9e8e-c05db7653cf6", 00:20:12.683 "is_configured": true, 00:20:12.683 "data_offset": 0, 00:20:12.683 "data_size": 65536 00:20:12.683 } 00:20:12.683 ] 00:20:12.683 }' 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.683 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.958 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:12.958 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.958 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.958 [2024-12-05 12:51:55.496059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:12.958 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.958 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:12.958 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:12.958 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:12.958 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 
00:20:12.958 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:12.958 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:12.958 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.958 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.958 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.958 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.958 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.958 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.958 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.959 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.959 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.959 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.959 "name": "Existed_Raid", 00:20:12.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.959 "strip_size_kb": 64, 00:20:12.959 "state": "configuring", 00:20:12.959 "raid_level": "raid0", 00:20:12.959 "superblock": false, 00:20:12.959 "num_base_bdevs": 3, 00:20:12.959 "num_base_bdevs_discovered": 1, 00:20:12.959 "num_base_bdevs_operational": 3, 00:20:12.959 "base_bdevs_list": [ 00:20:12.959 { 00:20:12.959 "name": "BaseBdev1", 00:20:12.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.959 "is_configured": false, 00:20:12.959 "data_offset": 0, 00:20:12.959 "data_size": 0 00:20:12.959 }, 00:20:12.959 { 00:20:12.959 "name": null, 
00:20:12.959 "uuid": "58652880-1814-47a5-b001-f1386340b2c9", 00:20:12.959 "is_configured": false, 00:20:12.959 "data_offset": 0, 00:20:12.959 "data_size": 65536 00:20:12.959 }, 00:20:12.959 { 00:20:12.959 "name": "BaseBdev3", 00:20:12.959 "uuid": "5f171989-f01e-47ab-9e8e-c05db7653cf6", 00:20:12.959 "is_configured": true, 00:20:12.959 "data_offset": 0, 00:20:12.959 "data_size": 65536 00:20:12.959 } 00:20:12.959 ] 00:20:12.959 }' 00:20:12.959 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.959 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.528 [2024-12-05 12:51:55.870816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:13.528 BaseBdev1 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 
-- # waitforbdev BaseBdev1 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.528 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.528 [ 00:20:13.528 { 00:20:13.529 "name": "BaseBdev1", 00:20:13.529 "aliases": [ 00:20:13.529 "b40dcb9a-705b-42a1-b7e7-7dd78f5baf13" 00:20:13.529 ], 00:20:13.529 "product_name": "Malloc disk", 00:20:13.529 "block_size": 512, 00:20:13.529 "num_blocks": 65536, 00:20:13.529 "uuid": "b40dcb9a-705b-42a1-b7e7-7dd78f5baf13", 00:20:13.529 "assigned_rate_limits": { 00:20:13.529 "rw_ios_per_sec": 0, 00:20:13.529 "rw_mbytes_per_sec": 0, 00:20:13.529 "r_mbytes_per_sec": 0, 00:20:13.529 "w_mbytes_per_sec": 0 00:20:13.529 }, 00:20:13.529 "claimed": true, 00:20:13.529 "claim_type": "exclusive_write", 00:20:13.529 
"zoned": false, 00:20:13.529 "supported_io_types": { 00:20:13.529 "read": true, 00:20:13.529 "write": true, 00:20:13.529 "unmap": true, 00:20:13.529 "flush": true, 00:20:13.529 "reset": true, 00:20:13.529 "nvme_admin": false, 00:20:13.529 "nvme_io": false, 00:20:13.529 "nvme_io_md": false, 00:20:13.529 "write_zeroes": true, 00:20:13.529 "zcopy": true, 00:20:13.529 "get_zone_info": false, 00:20:13.529 "zone_management": false, 00:20:13.529 "zone_append": false, 00:20:13.529 "compare": false, 00:20:13.529 "compare_and_write": false, 00:20:13.529 "abort": true, 00:20:13.529 "seek_hole": false, 00:20:13.529 "seek_data": false, 00:20:13.529 "copy": true, 00:20:13.529 "nvme_iov_md": false 00:20:13.529 }, 00:20:13.529 "memory_domains": [ 00:20:13.529 { 00:20:13.529 "dma_device_id": "system", 00:20:13.529 "dma_device_type": 1 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.529 "dma_device_type": 2 00:20:13.529 } 00:20:13.529 ], 00:20:13.529 "driver_specific": {} 00:20:13.529 } 00:20:13.529 ] 00:20:13.529 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.529 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:13.529 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:13.529 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:13.529 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:13.529 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:13.529 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:13.529 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:13.529 
12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.529 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.529 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.529 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.529 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.529 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.529 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.529 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.529 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.529 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.529 "name": "Existed_Raid", 00:20:13.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.529 "strip_size_kb": 64, 00:20:13.529 "state": "configuring", 00:20:13.529 "raid_level": "raid0", 00:20:13.529 "superblock": false, 00:20:13.529 "num_base_bdevs": 3, 00:20:13.529 "num_base_bdevs_discovered": 2, 00:20:13.529 "num_base_bdevs_operational": 3, 00:20:13.529 "base_bdevs_list": [ 00:20:13.529 { 00:20:13.529 "name": "BaseBdev1", 00:20:13.529 "uuid": "b40dcb9a-705b-42a1-b7e7-7dd78f5baf13", 00:20:13.529 "is_configured": true, 00:20:13.529 "data_offset": 0, 00:20:13.529 "data_size": 65536 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "name": null, 00:20:13.529 "uuid": "58652880-1814-47a5-b001-f1386340b2c9", 00:20:13.529 "is_configured": false, 00:20:13.529 "data_offset": 0, 00:20:13.529 "data_size": 65536 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "name": "BaseBdev3", 00:20:13.529 
"uuid": "5f171989-f01e-47ab-9e8e-c05db7653cf6", 00:20:13.529 "is_configured": true, 00:20:13.529 "data_offset": 0, 00:20:13.529 "data_size": 65536 00:20:13.529 } 00:20:13.529 ] 00:20:13.529 }' 00:20:13.529 12:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.529 12:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.789 [2024-12-05 12:51:56.226934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.789 "name": "Existed_Raid", 00:20:13.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.789 "strip_size_kb": 64, 00:20:13.789 "state": "configuring", 00:20:13.789 "raid_level": "raid0", 00:20:13.789 "superblock": false, 00:20:13.789 "num_base_bdevs": 3, 00:20:13.789 "num_base_bdevs_discovered": 1, 00:20:13.789 "num_base_bdevs_operational": 3, 00:20:13.789 "base_bdevs_list": [ 00:20:13.789 { 00:20:13.789 "name": "BaseBdev1", 00:20:13.789 "uuid": "b40dcb9a-705b-42a1-b7e7-7dd78f5baf13", 00:20:13.789 
"is_configured": true, 00:20:13.789 "data_offset": 0, 00:20:13.789 "data_size": 65536 00:20:13.789 }, 00:20:13.789 { 00:20:13.789 "name": null, 00:20:13.789 "uuid": "58652880-1814-47a5-b001-f1386340b2c9", 00:20:13.789 "is_configured": false, 00:20:13.789 "data_offset": 0, 00:20:13.789 "data_size": 65536 00:20:13.789 }, 00:20:13.789 { 00:20:13.789 "name": null, 00:20:13.789 "uuid": "5f171989-f01e-47ab-9e8e-c05db7653cf6", 00:20:13.789 "is_configured": false, 00:20:13.789 "data_offset": 0, 00:20:13.789 "data_size": 65536 00:20:13.789 } 00:20:13.789 ] 00:20:13.789 }' 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.789 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.050 [2024-12-05 12:51:56.559035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.050 "name": "Existed_Raid", 00:20:14.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.050 
"strip_size_kb": 64, 00:20:14.050 "state": "configuring", 00:20:14.050 "raid_level": "raid0", 00:20:14.050 "superblock": false, 00:20:14.050 "num_base_bdevs": 3, 00:20:14.050 "num_base_bdevs_discovered": 2, 00:20:14.050 "num_base_bdevs_operational": 3, 00:20:14.050 "base_bdevs_list": [ 00:20:14.050 { 00:20:14.050 "name": "BaseBdev1", 00:20:14.050 "uuid": "b40dcb9a-705b-42a1-b7e7-7dd78f5baf13", 00:20:14.050 "is_configured": true, 00:20:14.050 "data_offset": 0, 00:20:14.050 "data_size": 65536 00:20:14.050 }, 00:20:14.050 { 00:20:14.050 "name": null, 00:20:14.050 "uuid": "58652880-1814-47a5-b001-f1386340b2c9", 00:20:14.050 "is_configured": false, 00:20:14.050 "data_offset": 0, 00:20:14.050 "data_size": 65536 00:20:14.050 }, 00:20:14.050 { 00:20:14.050 "name": "BaseBdev3", 00:20:14.050 "uuid": "5f171989-f01e-47ab-9e8e-c05db7653cf6", 00:20:14.050 "is_configured": true, 00:20:14.050 "data_offset": 0, 00:20:14.050 "data_size": 65536 00:20:14.050 } 00:20:14.050 ] 00:20:14.050 }' 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.050 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.315 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.315 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:14.315 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.315 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.576 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.576 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:14.576 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:20:14.576 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.576 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.576 [2024-12-05 12:51:56.923095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:14.576 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.576 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:14.576 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:14.576 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:14.576 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:14.576 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:14.576 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:14.576 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.576 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.576 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.576 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.576 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.576 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.576 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.576 12:51:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.576 12:51:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.576 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.576 "name": "Existed_Raid", 00:20:14.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.576 "strip_size_kb": 64, 00:20:14.576 "state": "configuring", 00:20:14.576 "raid_level": "raid0", 00:20:14.576 "superblock": false, 00:20:14.576 "num_base_bdevs": 3, 00:20:14.576 "num_base_bdevs_discovered": 1, 00:20:14.576 "num_base_bdevs_operational": 3, 00:20:14.576 "base_bdevs_list": [ 00:20:14.576 { 00:20:14.576 "name": null, 00:20:14.576 "uuid": "b40dcb9a-705b-42a1-b7e7-7dd78f5baf13", 00:20:14.576 "is_configured": false, 00:20:14.576 "data_offset": 0, 00:20:14.576 "data_size": 65536 00:20:14.576 }, 00:20:14.576 { 00:20:14.576 "name": null, 00:20:14.576 "uuid": "58652880-1814-47a5-b001-f1386340b2c9", 00:20:14.576 "is_configured": false, 00:20:14.576 "data_offset": 0, 00:20:14.576 "data_size": 65536 00:20:14.576 }, 00:20:14.576 { 00:20:14.576 "name": "BaseBdev3", 00:20:14.576 "uuid": "5f171989-f01e-47ab-9e8e-c05db7653cf6", 00:20:14.576 "is_configured": true, 00:20:14.576 "data_offset": 0, 00:20:14.576 "data_size": 65536 00:20:14.576 } 00:20:14.576 ] 00:20:14.576 }' 00:20:14.576 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.576 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.837 [2024-12-05 12:51:57.309894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.837 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.837 "name": "Existed_Raid", 00:20:14.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.837 "strip_size_kb": 64, 00:20:14.837 "state": "configuring", 00:20:14.837 "raid_level": "raid0", 00:20:14.837 "superblock": false, 00:20:14.837 "num_base_bdevs": 3, 00:20:14.837 "num_base_bdevs_discovered": 2, 00:20:14.837 "num_base_bdevs_operational": 3, 00:20:14.837 "base_bdevs_list": [ 00:20:14.837 { 00:20:14.838 "name": null, 00:20:14.838 "uuid": "b40dcb9a-705b-42a1-b7e7-7dd78f5baf13", 00:20:14.838 "is_configured": false, 00:20:14.838 "data_offset": 0, 00:20:14.838 "data_size": 65536 00:20:14.838 }, 00:20:14.838 { 00:20:14.838 "name": "BaseBdev2", 00:20:14.838 "uuid": "58652880-1814-47a5-b001-f1386340b2c9", 00:20:14.838 "is_configured": true, 00:20:14.838 "data_offset": 0, 00:20:14.838 "data_size": 65536 00:20:14.838 }, 00:20:14.838 { 00:20:14.838 "name": "BaseBdev3", 00:20:14.838 "uuid": "5f171989-f01e-47ab-9e8e-c05db7653cf6", 00:20:14.838 "is_configured": true, 00:20:14.838 "data_offset": 0, 00:20:14.838 "data_size": 65536 00:20:14.838 } 00:20:14.838 ] 00:20:14.838 }' 00:20:14.838 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.838 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.099 
12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:15.099 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.099 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.099 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.099 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.099 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:15.099 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.099 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.099 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.099 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:15.099 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.099 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b40dcb9a-705b-42a1-b7e7-7dd78f5baf13 00:20:15.099 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.099 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.360 [2024-12-05 12:51:57.700618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:15.360 [2024-12-05 12:51:57.700659] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:15.360 [2024-12-05 12:51:57.700667] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 
00:20:15.360 [2024-12-05 12:51:57.700870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:15.360 [2024-12-05 12:51:57.700986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:15.360 [2024-12-05 12:51:57.700999] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:15.360 [2024-12-05 12:51:57.701187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.360 NewBaseBdev 00:20:15.360 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.360 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:15.360 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:15.360 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:15.360 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:15.360 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:15.360 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:15.360 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:15.360 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.360 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.360 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.360 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:15.360 12:51:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.360 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.360 [ 00:20:15.360 { 00:20:15.360 "name": "NewBaseBdev", 00:20:15.360 "aliases": [ 00:20:15.360 "b40dcb9a-705b-42a1-b7e7-7dd78f5baf13" 00:20:15.360 ], 00:20:15.360 "product_name": "Malloc disk", 00:20:15.360 "block_size": 512, 00:20:15.360 "num_blocks": 65536, 00:20:15.360 "uuid": "b40dcb9a-705b-42a1-b7e7-7dd78f5baf13", 00:20:15.360 "assigned_rate_limits": { 00:20:15.360 "rw_ios_per_sec": 0, 00:20:15.360 "rw_mbytes_per_sec": 0, 00:20:15.360 "r_mbytes_per_sec": 0, 00:20:15.360 "w_mbytes_per_sec": 0 00:20:15.360 }, 00:20:15.360 "claimed": true, 00:20:15.360 "claim_type": "exclusive_write", 00:20:15.360 "zoned": false, 00:20:15.360 "supported_io_types": { 00:20:15.360 "read": true, 00:20:15.360 "write": true, 00:20:15.360 "unmap": true, 00:20:15.360 "flush": true, 00:20:15.360 "reset": true, 00:20:15.360 "nvme_admin": false, 00:20:15.360 "nvme_io": false, 00:20:15.360 "nvme_io_md": false, 00:20:15.360 "write_zeroes": true, 00:20:15.360 "zcopy": true, 00:20:15.360 "get_zone_info": false, 00:20:15.360 "zone_management": false, 00:20:15.360 "zone_append": false, 00:20:15.360 "compare": false, 00:20:15.360 "compare_and_write": false, 00:20:15.360 "abort": true, 00:20:15.360 "seek_hole": false, 00:20:15.360 "seek_data": false, 00:20:15.360 "copy": true, 00:20:15.360 "nvme_iov_md": false 00:20:15.360 }, 00:20:15.360 "memory_domains": [ 00:20:15.360 { 00:20:15.360 "dma_device_id": "system", 00:20:15.360 "dma_device_type": 1 00:20:15.360 }, 00:20:15.360 { 00:20:15.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.361 "dma_device_type": 2 00:20:15.361 } 00:20:15.361 ], 00:20:15.361 "driver_specific": {} 00:20:15.361 } 00:20:15.361 ] 00:20:15.361 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.361 12:51:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:20:15.361 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:20:15.361 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:15.361 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:15.361 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:15.361 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:15.361 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:15.361 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.361 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.361 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.361 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.361 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.361 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.361 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.361 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.361 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.361 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.361 "name": "Existed_Raid", 00:20:15.361 "uuid": "329984ae-a98f-4f4a-b968-45b93a55c25c", 00:20:15.361 "strip_size_kb": 64, 
00:20:15.361 "state": "online", 00:20:15.361 "raid_level": "raid0", 00:20:15.361 "superblock": false, 00:20:15.361 "num_base_bdevs": 3, 00:20:15.361 "num_base_bdevs_discovered": 3, 00:20:15.361 "num_base_bdevs_operational": 3, 00:20:15.361 "base_bdevs_list": [ 00:20:15.361 { 00:20:15.361 "name": "NewBaseBdev", 00:20:15.361 "uuid": "b40dcb9a-705b-42a1-b7e7-7dd78f5baf13", 00:20:15.361 "is_configured": true, 00:20:15.361 "data_offset": 0, 00:20:15.361 "data_size": 65536 00:20:15.361 }, 00:20:15.361 { 00:20:15.361 "name": "BaseBdev2", 00:20:15.361 "uuid": "58652880-1814-47a5-b001-f1386340b2c9", 00:20:15.361 "is_configured": true, 00:20:15.361 "data_offset": 0, 00:20:15.361 "data_size": 65536 00:20:15.361 }, 00:20:15.361 { 00:20:15.361 "name": "BaseBdev3", 00:20:15.361 "uuid": "5f171989-f01e-47ab-9e8e-c05db7653cf6", 00:20:15.361 "is_configured": true, 00:20:15.361 "data_offset": 0, 00:20:15.361 "data_size": 65536 00:20:15.361 } 00:20:15.361 ] 00:20:15.361 }' 00:20:15.361 12:51:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.361 12:51:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.621 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:15.621 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:15.621 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:15.621 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:15.621 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:15.621 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:15.621 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:15.621 
12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.621 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.621 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:15.621 [2024-12-05 12:51:58.036984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:15.621 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.621 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:15.621 "name": "Existed_Raid", 00:20:15.621 "aliases": [ 00:20:15.621 "329984ae-a98f-4f4a-b968-45b93a55c25c" 00:20:15.621 ], 00:20:15.621 "product_name": "Raid Volume", 00:20:15.621 "block_size": 512, 00:20:15.621 "num_blocks": 196608, 00:20:15.621 "uuid": "329984ae-a98f-4f4a-b968-45b93a55c25c", 00:20:15.621 "assigned_rate_limits": { 00:20:15.621 "rw_ios_per_sec": 0, 00:20:15.621 "rw_mbytes_per_sec": 0, 00:20:15.621 "r_mbytes_per_sec": 0, 00:20:15.621 "w_mbytes_per_sec": 0 00:20:15.621 }, 00:20:15.621 "claimed": false, 00:20:15.621 "zoned": false, 00:20:15.621 "supported_io_types": { 00:20:15.621 "read": true, 00:20:15.621 "write": true, 00:20:15.621 "unmap": true, 00:20:15.621 "flush": true, 00:20:15.621 "reset": true, 00:20:15.621 "nvme_admin": false, 00:20:15.621 "nvme_io": false, 00:20:15.621 "nvme_io_md": false, 00:20:15.621 "write_zeroes": true, 00:20:15.621 "zcopy": false, 00:20:15.621 "get_zone_info": false, 00:20:15.621 "zone_management": false, 00:20:15.621 "zone_append": false, 00:20:15.621 "compare": false, 00:20:15.621 "compare_and_write": false, 00:20:15.621 "abort": false, 00:20:15.621 "seek_hole": false, 00:20:15.621 "seek_data": false, 00:20:15.621 "copy": false, 00:20:15.621 "nvme_iov_md": false 00:20:15.621 }, 00:20:15.621 "memory_domains": [ 00:20:15.621 { 00:20:15.621 "dma_device_id": "system", 00:20:15.621 
"dma_device_type": 1 00:20:15.621 }, 00:20:15.621 { 00:20:15.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.621 "dma_device_type": 2 00:20:15.621 }, 00:20:15.621 { 00:20:15.621 "dma_device_id": "system", 00:20:15.621 "dma_device_type": 1 00:20:15.621 }, 00:20:15.621 { 00:20:15.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.621 "dma_device_type": 2 00:20:15.621 }, 00:20:15.621 { 00:20:15.621 "dma_device_id": "system", 00:20:15.621 "dma_device_type": 1 00:20:15.621 }, 00:20:15.621 { 00:20:15.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.621 "dma_device_type": 2 00:20:15.621 } 00:20:15.621 ], 00:20:15.621 "driver_specific": { 00:20:15.621 "raid": { 00:20:15.621 "uuid": "329984ae-a98f-4f4a-b968-45b93a55c25c", 00:20:15.621 "strip_size_kb": 64, 00:20:15.621 "state": "online", 00:20:15.621 "raid_level": "raid0", 00:20:15.621 "superblock": false, 00:20:15.621 "num_base_bdevs": 3, 00:20:15.621 "num_base_bdevs_discovered": 3, 00:20:15.621 "num_base_bdevs_operational": 3, 00:20:15.621 "base_bdevs_list": [ 00:20:15.621 { 00:20:15.621 "name": "NewBaseBdev", 00:20:15.621 "uuid": "b40dcb9a-705b-42a1-b7e7-7dd78f5baf13", 00:20:15.621 "is_configured": true, 00:20:15.621 "data_offset": 0, 00:20:15.621 "data_size": 65536 00:20:15.621 }, 00:20:15.621 { 00:20:15.621 "name": "BaseBdev2", 00:20:15.621 "uuid": "58652880-1814-47a5-b001-f1386340b2c9", 00:20:15.621 "is_configured": true, 00:20:15.621 "data_offset": 0, 00:20:15.621 "data_size": 65536 00:20:15.621 }, 00:20:15.621 { 00:20:15.621 "name": "BaseBdev3", 00:20:15.621 "uuid": "5f171989-f01e-47ab-9e8e-c05db7653cf6", 00:20:15.621 "is_configured": true, 00:20:15.621 "data_offset": 0, 00:20:15.621 "data_size": 65536 00:20:15.621 } 00:20:15.621 ] 00:20:15.621 } 00:20:15.621 } 00:20:15.621 }' 00:20:15.621 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:15.621 12:51:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:15.621 BaseBdev2 00:20:15.621 BaseBdev3' 00:20:15.621 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:15.621 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:15.621 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:15.622 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:15.622 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.622 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:15.622 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.622 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.622 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:15.622 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:15.622 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:15.622 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:15.622 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:15.622 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.622 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.622 12:51:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.622 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:15.622 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:15.622 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:15.622 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:15.622 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:15.622 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.622 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.622 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.881 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:15.881 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:15.881 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:15.881 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.881 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.881 [2024-12-05 12:51:58.216748] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:15.881 [2024-12-05 12:51:58.216774] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:15.881 [2024-12-05 12:51:58.216841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:15.881 [2024-12-05 12:51:58.216889] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:15.881 [2024-12-05 12:51:58.216900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:15.881 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.881 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62261 00:20:15.881 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62261 ']' 00:20:15.881 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62261 00:20:15.881 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:20:15.881 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.881 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62261 00:20:15.881 killing process with pid 62261 00:20:15.881 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:15.881 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:15.881 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62261' 00:20:15.881 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62261 00:20:15.881 [2024-12-05 12:51:58.249266] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:15.881 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62261 00:20:15.881 [2024-12-05 12:51:58.400132] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:16.453 12:51:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:20:16.453 
00:20:16.453 real 0m7.383s 00:20:16.453 user 0m11.965s 00:20:16.453 sys 0m1.163s 00:20:16.453 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.453 ************************************ 00:20:16.453 END TEST raid_state_function_test 00:20:16.453 ************************************ 00:20:16.453 12:51:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.453 12:51:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:20:16.453 12:51:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:16.453 12:51:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.453 12:51:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:16.453 ************************************ 00:20:16.453 START TEST raid_state_function_test_sb 00:20:16.453 ************************************ 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:16.453 
12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62855 00:20:16.453 Process raid pid: 62855 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62855' 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62855 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62855 ']' 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.453 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.714 [2024-12-05 12:51:59.092586] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:20:16.714 [2024-12-05 12:51:59.092705] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.714 [2024-12-05 12:51:59.248126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.974 [2024-12-05 12:51:59.332400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.974 [2024-12-05 12:51:59.443723] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:16.974 [2024-12-05 12:51:59.443769] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.544 [2024-12-05 12:51:59.940247] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:17.544 [2024-12-05 12:51:59.940296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:17.544 [2024-12-05 12:51:59.940305] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:17.544 [2024-12-05 12:51:59.940313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:17.544 [2024-12-05 12:51:59.940318] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:20:17.544 [2024-12-05 12:51:59.940326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.544 "name": "Existed_Raid", 00:20:17.544 "uuid": "6f3a6260-c141-4a98-848e-515d43b4226c", 00:20:17.544 "strip_size_kb": 64, 00:20:17.544 "state": "configuring", 00:20:17.544 "raid_level": "raid0", 00:20:17.544 "superblock": true, 00:20:17.544 "num_base_bdevs": 3, 00:20:17.544 "num_base_bdevs_discovered": 0, 00:20:17.544 "num_base_bdevs_operational": 3, 00:20:17.544 "base_bdevs_list": [ 00:20:17.544 { 00:20:17.544 "name": "BaseBdev1", 00:20:17.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.544 "is_configured": false, 00:20:17.544 "data_offset": 0, 00:20:17.544 "data_size": 0 00:20:17.544 }, 00:20:17.544 { 00:20:17.544 "name": "BaseBdev2", 00:20:17.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.544 "is_configured": false, 00:20:17.544 "data_offset": 0, 00:20:17.544 "data_size": 0 00:20:17.544 }, 00:20:17.544 { 00:20:17.544 "name": "BaseBdev3", 00:20:17.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.544 "is_configured": false, 00:20:17.544 "data_offset": 0, 00:20:17.544 "data_size": 0 00:20:17.544 } 00:20:17.544 ] 00:20:17.544 }' 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.544 12:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.804 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:17.804 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.804 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.804 [2024-12-05 12:52:00.264265] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:17.804 [2024-12-05 12:52:00.264300] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:17.804 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.804 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:17.804 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.804 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.804 [2024-12-05 12:52:00.272267] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:17.804 [2024-12-05 12:52:00.272306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:17.804 [2024-12-05 12:52:00.272314] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:17.804 [2024-12-05 12:52:00.272321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:17.804 [2024-12-05 12:52:00.272326] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:17.804 [2024-12-05 12:52:00.272333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:17.804 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.804 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:17.804 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.804 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.804 [2024-12-05 12:52:00.300409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:17.804 BaseBdev1 
00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.805 [ 00:20:17.805 { 00:20:17.805 "name": "BaseBdev1", 00:20:17.805 "aliases": [ 00:20:17.805 "154fab2e-ffd7-4a30-aa90-a363d49c67eb" 00:20:17.805 ], 00:20:17.805 "product_name": "Malloc disk", 00:20:17.805 "block_size": 512, 00:20:17.805 "num_blocks": 65536, 00:20:17.805 "uuid": "154fab2e-ffd7-4a30-aa90-a363d49c67eb", 00:20:17.805 "assigned_rate_limits": { 00:20:17.805 
"rw_ios_per_sec": 0, 00:20:17.805 "rw_mbytes_per_sec": 0, 00:20:17.805 "r_mbytes_per_sec": 0, 00:20:17.805 "w_mbytes_per_sec": 0 00:20:17.805 }, 00:20:17.805 "claimed": true, 00:20:17.805 "claim_type": "exclusive_write", 00:20:17.805 "zoned": false, 00:20:17.805 "supported_io_types": { 00:20:17.805 "read": true, 00:20:17.805 "write": true, 00:20:17.805 "unmap": true, 00:20:17.805 "flush": true, 00:20:17.805 "reset": true, 00:20:17.805 "nvme_admin": false, 00:20:17.805 "nvme_io": false, 00:20:17.805 "nvme_io_md": false, 00:20:17.805 "write_zeroes": true, 00:20:17.805 "zcopy": true, 00:20:17.805 "get_zone_info": false, 00:20:17.805 "zone_management": false, 00:20:17.805 "zone_append": false, 00:20:17.805 "compare": false, 00:20:17.805 "compare_and_write": false, 00:20:17.805 "abort": true, 00:20:17.805 "seek_hole": false, 00:20:17.805 "seek_data": false, 00:20:17.805 "copy": true, 00:20:17.805 "nvme_iov_md": false 00:20:17.805 }, 00:20:17.805 "memory_domains": [ 00:20:17.805 { 00:20:17.805 "dma_device_id": "system", 00:20:17.805 "dma_device_type": 1 00:20:17.805 }, 00:20:17.805 { 00:20:17.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.805 "dma_device_type": 2 00:20:17.805 } 00:20:17.805 ], 00:20:17.805 "driver_specific": {} 00:20:17.805 } 00:20:17.805 ] 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.805 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.805 "name": "Existed_Raid", 00:20:17.805 "uuid": "c33039fa-6316-4b1f-9f3f-b845878658f5", 00:20:17.805 "strip_size_kb": 64, 00:20:17.805 "state": "configuring", 00:20:17.805 "raid_level": "raid0", 00:20:17.805 "superblock": true, 00:20:17.805 "num_base_bdevs": 3, 00:20:17.805 "num_base_bdevs_discovered": 1, 00:20:17.805 "num_base_bdevs_operational": 3, 00:20:17.805 "base_bdevs_list": [ 00:20:17.805 { 00:20:17.805 "name": "BaseBdev1", 00:20:17.805 "uuid": "154fab2e-ffd7-4a30-aa90-a363d49c67eb", 00:20:17.805 "is_configured": true, 00:20:17.805 "data_offset": 2048, 00:20:17.805 "data_size": 63488 
00:20:17.805 }, 00:20:17.805 { 00:20:17.805 "name": "BaseBdev2", 00:20:17.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.805 "is_configured": false, 00:20:17.805 "data_offset": 0, 00:20:17.805 "data_size": 0 00:20:17.805 }, 00:20:17.805 { 00:20:17.805 "name": "BaseBdev3", 00:20:17.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.805 "is_configured": false, 00:20:17.805 "data_offset": 0, 00:20:17.806 "data_size": 0 00:20:17.806 } 00:20:17.806 ] 00:20:17.806 }' 00:20:17.806 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.806 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.065 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:18.065 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.065 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.065 [2024-12-05 12:52:00.636519] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:18.065 [2024-12-05 12:52:00.636568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:18.065 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.065 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:18.065 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.065 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.065 [2024-12-05 12:52:00.644578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:18.065 [2024-12-05 
12:52:00.646145] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:18.065 [2024-12-05 12:52:00.646182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:18.065 [2024-12-05 12:52:00.646190] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:18.065 [2024-12-05 12:52:00.646198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:18.065 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.324 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:18.324 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:18.324 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:18.324 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:18.324 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:18.324 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:18.324 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:18.324 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:18.324 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.324 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.324 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.324 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:20:18.324 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.324 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.324 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.324 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.324 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.324 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.324 "name": "Existed_Raid", 00:20:18.324 "uuid": "dacb9701-11a2-4988-9f4d-c78c6dd564bf", 00:20:18.324 "strip_size_kb": 64, 00:20:18.324 "state": "configuring", 00:20:18.324 "raid_level": "raid0", 00:20:18.324 "superblock": true, 00:20:18.324 "num_base_bdevs": 3, 00:20:18.324 "num_base_bdevs_discovered": 1, 00:20:18.324 "num_base_bdevs_operational": 3, 00:20:18.324 "base_bdevs_list": [ 00:20:18.324 { 00:20:18.324 "name": "BaseBdev1", 00:20:18.324 "uuid": "154fab2e-ffd7-4a30-aa90-a363d49c67eb", 00:20:18.324 "is_configured": true, 00:20:18.324 "data_offset": 2048, 00:20:18.324 "data_size": 63488 00:20:18.324 }, 00:20:18.324 { 00:20:18.324 "name": "BaseBdev2", 00:20:18.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.324 "is_configured": false, 00:20:18.324 "data_offset": 0, 00:20:18.324 "data_size": 0 00:20:18.324 }, 00:20:18.324 { 00:20:18.324 "name": "BaseBdev3", 00:20:18.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.324 "is_configured": false, 00:20:18.324 "data_offset": 0, 00:20:18.324 "data_size": 0 00:20:18.324 } 00:20:18.324 ] 00:20:18.324 }' 00:20:18.324 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.324 12:52:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:18.586 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:18.586 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.586 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.586 [2024-12-05 12:52:00.995157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:18.586 BaseBdev2 00:20:18.586 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.586 12:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:18.586 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:18.586 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:18.586 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:18.586 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:18.586 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:18.586 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:18.586 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.586 12:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.586 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.586 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:18.586 12:52:01 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.586 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.586 [ 00:20:18.586 { 00:20:18.586 "name": "BaseBdev2", 00:20:18.586 "aliases": [ 00:20:18.586 "829aeaf8-42a3-4dc7-8013-e8ed642acf73" 00:20:18.586 ], 00:20:18.586 "product_name": "Malloc disk", 00:20:18.586 "block_size": 512, 00:20:18.586 "num_blocks": 65536, 00:20:18.586 "uuid": "829aeaf8-42a3-4dc7-8013-e8ed642acf73", 00:20:18.586 "assigned_rate_limits": { 00:20:18.586 "rw_ios_per_sec": 0, 00:20:18.586 "rw_mbytes_per_sec": 0, 00:20:18.586 "r_mbytes_per_sec": 0, 00:20:18.586 "w_mbytes_per_sec": 0 00:20:18.586 }, 00:20:18.586 "claimed": true, 00:20:18.586 "claim_type": "exclusive_write", 00:20:18.586 "zoned": false, 00:20:18.586 "supported_io_types": { 00:20:18.586 "read": true, 00:20:18.586 "write": true, 00:20:18.586 "unmap": true, 00:20:18.586 "flush": true, 00:20:18.586 "reset": true, 00:20:18.586 "nvme_admin": false, 00:20:18.586 "nvme_io": false, 00:20:18.586 "nvme_io_md": false, 00:20:18.586 "write_zeroes": true, 00:20:18.586 "zcopy": true, 00:20:18.586 "get_zone_info": false, 00:20:18.586 "zone_management": false, 00:20:18.586 "zone_append": false, 00:20:18.586 "compare": false, 00:20:18.586 "compare_and_write": false, 00:20:18.586 "abort": true, 00:20:18.586 "seek_hole": false, 00:20:18.586 "seek_data": false, 00:20:18.586 "copy": true, 00:20:18.586 "nvme_iov_md": false 00:20:18.586 }, 00:20:18.586 "memory_domains": [ 00:20:18.586 { 00:20:18.586 "dma_device_id": "system", 00:20:18.586 "dma_device_type": 1 00:20:18.586 }, 00:20:18.586 { 00:20:18.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.586 "dma_device_type": 2 00:20:18.586 } 00:20:18.586 ], 00:20:18.586 "driver_specific": {} 00:20:18.586 } 00:20:18.586 ] 00:20:18.586 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.586 12:52:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:20:18.586 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:18.586 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:18.587 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:18.587 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:18.587 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:18.587 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:18.587 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:18.587 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:18.587 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.587 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.587 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.587 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.587 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.587 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.587 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.587 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.587 12:52:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.587 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.587 "name": "Existed_Raid", 00:20:18.587 "uuid": "dacb9701-11a2-4988-9f4d-c78c6dd564bf", 00:20:18.587 "strip_size_kb": 64, 00:20:18.587 "state": "configuring", 00:20:18.587 "raid_level": "raid0", 00:20:18.587 "superblock": true, 00:20:18.587 "num_base_bdevs": 3, 00:20:18.587 "num_base_bdevs_discovered": 2, 00:20:18.587 "num_base_bdevs_operational": 3, 00:20:18.587 "base_bdevs_list": [ 00:20:18.587 { 00:20:18.587 "name": "BaseBdev1", 00:20:18.587 "uuid": "154fab2e-ffd7-4a30-aa90-a363d49c67eb", 00:20:18.587 "is_configured": true, 00:20:18.587 "data_offset": 2048, 00:20:18.587 "data_size": 63488 00:20:18.587 }, 00:20:18.587 { 00:20:18.587 "name": "BaseBdev2", 00:20:18.587 "uuid": "829aeaf8-42a3-4dc7-8013-e8ed642acf73", 00:20:18.587 "is_configured": true, 00:20:18.587 "data_offset": 2048, 00:20:18.587 "data_size": 63488 00:20:18.587 }, 00:20:18.587 { 00:20:18.587 "name": "BaseBdev3", 00:20:18.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.587 "is_configured": false, 00:20:18.587 "data_offset": 0, 00:20:18.587 "data_size": 0 00:20:18.587 } 00:20:18.587 ] 00:20:18.587 }' 00:20:18.587 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.587 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.849 [2024-12-05 12:52:01.404191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:18.849 [2024-12-05 12:52:01.404389] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:18.849 [2024-12-05 12:52:01.404405] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:18.849 [2024-12-05 12:52:01.404634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:18.849 BaseBdev3 00:20:18.849 [2024-12-05 12:52:01.404749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:18.849 [2024-12-05 12:52:01.404756] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:18.849 [2024-12-05 12:52:01.404861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.849 [ 00:20:18.849 { 00:20:18.849 "name": "BaseBdev3", 00:20:18.849 "aliases": [ 00:20:18.849 "3ecd2748-98f6-4e14-9a05-e7792605c65f" 00:20:18.849 ], 00:20:18.849 "product_name": "Malloc disk", 00:20:18.849 "block_size": 512, 00:20:18.849 "num_blocks": 65536, 00:20:18.849 "uuid": "3ecd2748-98f6-4e14-9a05-e7792605c65f", 00:20:18.849 "assigned_rate_limits": { 00:20:18.849 "rw_ios_per_sec": 0, 00:20:18.849 "rw_mbytes_per_sec": 0, 00:20:18.849 "r_mbytes_per_sec": 0, 00:20:18.849 "w_mbytes_per_sec": 0 00:20:18.849 }, 00:20:18.849 "claimed": true, 00:20:18.849 "claim_type": "exclusive_write", 00:20:18.849 "zoned": false, 00:20:18.849 "supported_io_types": { 00:20:18.849 "read": true, 00:20:18.849 "write": true, 00:20:18.849 "unmap": true, 00:20:18.849 "flush": true, 00:20:18.849 "reset": true, 00:20:18.849 "nvme_admin": false, 00:20:18.849 "nvme_io": false, 00:20:18.849 "nvme_io_md": false, 00:20:18.849 "write_zeroes": true, 00:20:18.849 "zcopy": true, 00:20:18.849 "get_zone_info": false, 00:20:18.849 "zone_management": false, 00:20:18.849 "zone_append": false, 00:20:18.849 "compare": false, 00:20:18.849 "compare_and_write": false, 00:20:18.849 "abort": true, 00:20:18.849 "seek_hole": false, 00:20:18.849 "seek_data": false, 00:20:18.849 "copy": true, 00:20:18.849 "nvme_iov_md": false 00:20:18.849 }, 00:20:18.849 "memory_domains": [ 00:20:18.849 { 00:20:18.849 "dma_device_id": "system", 00:20:18.849 "dma_device_type": 1 00:20:18.849 }, 00:20:18.849 { 00:20:18.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.849 "dma_device_type": 2 00:20:18.849 } 00:20:18.849 ], 00:20:18.849 "driver_specific": 
{} 00:20:18.849 } 00:20:18.849 ] 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:20:18.849 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:19.109 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:19.109 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:19.109 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:19.109 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:19.109 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.109 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.109 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.109 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.109 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.109 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:19.109 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:19.109 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.109 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.109 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.109 "name": "Existed_Raid", 00:20:19.109 "uuid": "dacb9701-11a2-4988-9f4d-c78c6dd564bf", 00:20:19.109 "strip_size_kb": 64, 00:20:19.109 "state": "online", 00:20:19.109 "raid_level": "raid0", 00:20:19.109 "superblock": true, 00:20:19.109 "num_base_bdevs": 3, 00:20:19.109 "num_base_bdevs_discovered": 3, 00:20:19.109 "num_base_bdevs_operational": 3, 00:20:19.109 "base_bdevs_list": [ 00:20:19.109 { 00:20:19.109 "name": "BaseBdev1", 00:20:19.109 "uuid": "154fab2e-ffd7-4a30-aa90-a363d49c67eb", 00:20:19.109 "is_configured": true, 00:20:19.109 "data_offset": 2048, 00:20:19.109 "data_size": 63488 00:20:19.109 }, 00:20:19.109 { 00:20:19.109 "name": "BaseBdev2", 00:20:19.109 "uuid": "829aeaf8-42a3-4dc7-8013-e8ed642acf73", 00:20:19.109 "is_configured": true, 00:20:19.109 "data_offset": 2048, 00:20:19.109 "data_size": 63488 00:20:19.109 }, 00:20:19.109 { 00:20:19.109 "name": "BaseBdev3", 00:20:19.109 "uuid": "3ecd2748-98f6-4e14-9a05-e7792605c65f", 00:20:19.109 "is_configured": true, 00:20:19.109 "data_offset": 2048, 00:20:19.109 "data_size": 63488 00:20:19.109 } 00:20:19.109 ] 00:20:19.109 }' 00:20:19.109 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.109 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.370 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:19.370 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:19.370 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:20:19.370 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:19.370 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:19.370 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:19.370 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:19.370 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:19.370 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.370 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.370 [2024-12-05 12:52:01.760581] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:19.370 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.370 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:19.370 "name": "Existed_Raid", 00:20:19.370 "aliases": [ 00:20:19.370 "dacb9701-11a2-4988-9f4d-c78c6dd564bf" 00:20:19.370 ], 00:20:19.370 "product_name": "Raid Volume", 00:20:19.370 "block_size": 512, 00:20:19.370 "num_blocks": 190464, 00:20:19.370 "uuid": "dacb9701-11a2-4988-9f4d-c78c6dd564bf", 00:20:19.370 "assigned_rate_limits": { 00:20:19.370 "rw_ios_per_sec": 0, 00:20:19.370 "rw_mbytes_per_sec": 0, 00:20:19.370 "r_mbytes_per_sec": 0, 00:20:19.370 "w_mbytes_per_sec": 0 00:20:19.370 }, 00:20:19.370 "claimed": false, 00:20:19.370 "zoned": false, 00:20:19.370 "supported_io_types": { 00:20:19.370 "read": true, 00:20:19.370 "write": true, 00:20:19.370 "unmap": true, 00:20:19.370 "flush": true, 00:20:19.370 "reset": true, 00:20:19.370 "nvme_admin": false, 00:20:19.370 "nvme_io": false, 00:20:19.370 "nvme_io_md": false, 00:20:19.370 
"write_zeroes": true, 00:20:19.370 "zcopy": false, 00:20:19.370 "get_zone_info": false, 00:20:19.370 "zone_management": false, 00:20:19.370 "zone_append": false, 00:20:19.370 "compare": false, 00:20:19.370 "compare_and_write": false, 00:20:19.370 "abort": false, 00:20:19.370 "seek_hole": false, 00:20:19.370 "seek_data": false, 00:20:19.370 "copy": false, 00:20:19.370 "nvme_iov_md": false 00:20:19.370 }, 00:20:19.370 "memory_domains": [ 00:20:19.370 { 00:20:19.370 "dma_device_id": "system", 00:20:19.370 "dma_device_type": 1 00:20:19.370 }, 00:20:19.370 { 00:20:19.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.370 "dma_device_type": 2 00:20:19.370 }, 00:20:19.370 { 00:20:19.370 "dma_device_id": "system", 00:20:19.370 "dma_device_type": 1 00:20:19.370 }, 00:20:19.370 { 00:20:19.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.370 "dma_device_type": 2 00:20:19.370 }, 00:20:19.370 { 00:20:19.370 "dma_device_id": "system", 00:20:19.370 "dma_device_type": 1 00:20:19.370 }, 00:20:19.370 { 00:20:19.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.370 "dma_device_type": 2 00:20:19.370 } 00:20:19.370 ], 00:20:19.370 "driver_specific": { 00:20:19.370 "raid": { 00:20:19.370 "uuid": "dacb9701-11a2-4988-9f4d-c78c6dd564bf", 00:20:19.370 "strip_size_kb": 64, 00:20:19.370 "state": "online", 00:20:19.370 "raid_level": "raid0", 00:20:19.370 "superblock": true, 00:20:19.370 "num_base_bdevs": 3, 00:20:19.370 "num_base_bdevs_discovered": 3, 00:20:19.370 "num_base_bdevs_operational": 3, 00:20:19.370 "base_bdevs_list": [ 00:20:19.370 { 00:20:19.370 "name": "BaseBdev1", 00:20:19.370 "uuid": "154fab2e-ffd7-4a30-aa90-a363d49c67eb", 00:20:19.370 "is_configured": true, 00:20:19.370 "data_offset": 2048, 00:20:19.370 "data_size": 63488 00:20:19.370 }, 00:20:19.370 { 00:20:19.370 "name": "BaseBdev2", 00:20:19.370 "uuid": "829aeaf8-42a3-4dc7-8013-e8ed642acf73", 00:20:19.370 "is_configured": true, 00:20:19.370 "data_offset": 2048, 00:20:19.370 "data_size": 63488 00:20:19.370 }, 
00:20:19.370 { 00:20:19.370 "name": "BaseBdev3", 00:20:19.370 "uuid": "3ecd2748-98f6-4e14-9a05-e7792605c65f", 00:20:19.370 "is_configured": true, 00:20:19.370 "data_offset": 2048, 00:20:19.370 "data_size": 63488 00:20:19.370 } 00:20:19.370 ] 00:20:19.370 } 00:20:19.370 } 00:20:19.370 }' 00:20:19.370 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:19.370 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:19.370 BaseBdev2 00:20:19.370 BaseBdev3' 00:20:19.370 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:19.370 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:19.371 
12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.371 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.371 [2024-12-05 12:52:01.936382] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:19.371 [2024-12-05 12:52:01.936411] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:19.371 [2024-12-05 12:52:01.936455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:19.632 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.632 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:19.632 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:20:19.632 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:19.632 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:20:19.632 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:20:19.632 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:20:19.632 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:19.632 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:20:19.632 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:19.632 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:19.632 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:19.632 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:20:19.632 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.632 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.632 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.632 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.632 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.632 12:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:19.632 12:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.632 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.632 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.632 "name": "Existed_Raid", 00:20:19.632 "uuid": "dacb9701-11a2-4988-9f4d-c78c6dd564bf", 00:20:19.632 "strip_size_kb": 64, 00:20:19.632 "state": "offline", 00:20:19.632 "raid_level": "raid0", 00:20:19.632 "superblock": true, 00:20:19.632 "num_base_bdevs": 3, 00:20:19.632 "num_base_bdevs_discovered": 2, 00:20:19.632 "num_base_bdevs_operational": 2, 00:20:19.632 "base_bdevs_list": [ 00:20:19.632 { 00:20:19.632 "name": null, 00:20:19.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.632 "is_configured": false, 00:20:19.632 "data_offset": 0, 00:20:19.632 "data_size": 63488 00:20:19.632 }, 00:20:19.632 { 00:20:19.632 "name": "BaseBdev2", 00:20:19.632 "uuid": "829aeaf8-42a3-4dc7-8013-e8ed642acf73", 00:20:19.632 "is_configured": true, 00:20:19.632 "data_offset": 2048, 00:20:19.632 "data_size": 63488 00:20:19.632 }, 00:20:19.632 { 00:20:19.632 "name": "BaseBdev3", 00:20:19.632 "uuid": "3ecd2748-98f6-4e14-9a05-e7792605c65f", 
00:20:19.632 "is_configured": true, 00:20:19.632 "data_offset": 2048, 00:20:19.632 "data_size": 63488 00:20:19.632 } 00:20:19.632 ] 00:20:19.632 }' 00:20:19.632 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.632 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.894 [2024-12-05 12:52:02.317206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.894 [2024-12-05 12:52:02.400597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:19.894 [2024-12-05 12:52:02.400640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.894 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.155 BaseBdev2 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:20.155 12:52:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.155 [ 00:20:20.155 { 00:20:20.155 "name": "BaseBdev2", 00:20:20.155 "aliases": [ 00:20:20.155 "86ca2d0d-209d-4d46-bad4-2fabbbc35299" 00:20:20.155 ], 00:20:20.155 "product_name": "Malloc disk", 00:20:20.155 "block_size": 512, 00:20:20.155 "num_blocks": 65536, 00:20:20.155 "uuid": "86ca2d0d-209d-4d46-bad4-2fabbbc35299", 00:20:20.155 "assigned_rate_limits": { 00:20:20.155 "rw_ios_per_sec": 0, 00:20:20.155 "rw_mbytes_per_sec": 0, 00:20:20.155 "r_mbytes_per_sec": 0, 00:20:20.155 "w_mbytes_per_sec": 0 00:20:20.155 }, 00:20:20.155 "claimed": false, 00:20:20.155 "zoned": false, 00:20:20.155 "supported_io_types": { 00:20:20.155 "read": true, 00:20:20.155 "write": true, 00:20:20.155 "unmap": true, 00:20:20.155 "flush": true, 00:20:20.155 "reset": true, 00:20:20.155 "nvme_admin": false, 00:20:20.155 "nvme_io": false, 00:20:20.155 "nvme_io_md": false, 00:20:20.155 "write_zeroes": true, 00:20:20.155 "zcopy": true, 00:20:20.155 "get_zone_info": false, 00:20:20.155 
"zone_management": false, 00:20:20.155 "zone_append": false, 00:20:20.155 "compare": false, 00:20:20.155 "compare_and_write": false, 00:20:20.155 "abort": true, 00:20:20.155 "seek_hole": false, 00:20:20.155 "seek_data": false, 00:20:20.155 "copy": true, 00:20:20.155 "nvme_iov_md": false 00:20:20.155 }, 00:20:20.155 "memory_domains": [ 00:20:20.155 { 00:20:20.155 "dma_device_id": "system", 00:20:20.155 "dma_device_type": 1 00:20:20.155 }, 00:20:20.155 { 00:20:20.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.155 "dma_device_type": 2 00:20:20.155 } 00:20:20.155 ], 00:20:20.155 "driver_specific": {} 00:20:20.155 } 00:20:20.155 ] 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.155 BaseBdev3 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:20.155 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.156 [ 00:20:20.156 { 00:20:20.156 "name": "BaseBdev3", 00:20:20.156 "aliases": [ 00:20:20.156 "a35fcd5c-29a2-48f8-9a74-673117f827ec" 00:20:20.156 ], 00:20:20.156 "product_name": "Malloc disk", 00:20:20.156 "block_size": 512, 00:20:20.156 "num_blocks": 65536, 00:20:20.156 "uuid": "a35fcd5c-29a2-48f8-9a74-673117f827ec", 00:20:20.156 "assigned_rate_limits": { 00:20:20.156 "rw_ios_per_sec": 0, 00:20:20.156 "rw_mbytes_per_sec": 0, 00:20:20.156 "r_mbytes_per_sec": 0, 00:20:20.156 "w_mbytes_per_sec": 0 00:20:20.156 }, 00:20:20.156 "claimed": false, 00:20:20.156 "zoned": false, 00:20:20.156 "supported_io_types": { 00:20:20.156 "read": true, 00:20:20.156 "write": true, 00:20:20.156 "unmap": true, 00:20:20.156 "flush": true, 00:20:20.156 "reset": true, 00:20:20.156 "nvme_admin": false, 00:20:20.156 "nvme_io": false, 00:20:20.156 "nvme_io_md": false, 00:20:20.156 "write_zeroes": true, 00:20:20.156 
"zcopy": true, 00:20:20.156 "get_zone_info": false, 00:20:20.156 "zone_management": false, 00:20:20.156 "zone_append": false, 00:20:20.156 "compare": false, 00:20:20.156 "compare_and_write": false, 00:20:20.156 "abort": true, 00:20:20.156 "seek_hole": false, 00:20:20.156 "seek_data": false, 00:20:20.156 "copy": true, 00:20:20.156 "nvme_iov_md": false 00:20:20.156 }, 00:20:20.156 "memory_domains": [ 00:20:20.156 { 00:20:20.156 "dma_device_id": "system", 00:20:20.156 "dma_device_type": 1 00:20:20.156 }, 00:20:20.156 { 00:20:20.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.156 "dma_device_type": 2 00:20:20.156 } 00:20:20.156 ], 00:20:20.156 "driver_specific": {} 00:20:20.156 } 00:20:20.156 ] 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.156 [2024-12-05 12:52:02.575666] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:20.156 [2024-12-05 12:52:02.575706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:20.156 [2024-12-05 12:52:02.575723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:20.156 [2024-12-05 12:52:02.577187] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.156 12:52:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:20.156 "name": "Existed_Raid", 00:20:20.156 "uuid": "81e19284-6764-4b43-afea-402dcafbb04a", 00:20:20.156 "strip_size_kb": 64, 00:20:20.156 "state": "configuring", 00:20:20.156 "raid_level": "raid0", 00:20:20.156 "superblock": true, 00:20:20.156 "num_base_bdevs": 3, 00:20:20.156 "num_base_bdevs_discovered": 2, 00:20:20.156 "num_base_bdevs_operational": 3, 00:20:20.156 "base_bdevs_list": [ 00:20:20.156 { 00:20:20.156 "name": "BaseBdev1", 00:20:20.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.156 "is_configured": false, 00:20:20.156 "data_offset": 0, 00:20:20.156 "data_size": 0 00:20:20.156 }, 00:20:20.156 { 00:20:20.156 "name": "BaseBdev2", 00:20:20.156 "uuid": "86ca2d0d-209d-4d46-bad4-2fabbbc35299", 00:20:20.156 "is_configured": true, 00:20:20.156 "data_offset": 2048, 00:20:20.156 "data_size": 63488 00:20:20.156 }, 00:20:20.156 { 00:20:20.156 "name": "BaseBdev3", 00:20:20.156 "uuid": "a35fcd5c-29a2-48f8-9a74-673117f827ec", 00:20:20.156 "is_configured": true, 00:20:20.156 "data_offset": 2048, 00:20:20.156 "data_size": 63488 00:20:20.156 } 00:20:20.156 ] 00:20:20.156 }' 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:20.156 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.417 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:20.417 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.417 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.417 [2024-12-05 12:52:02.891734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:20.417 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.417 12:52:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:20.417 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:20.417 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:20.417 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:20.417 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:20.417 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:20.417 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:20.417 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:20.417 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:20.417 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:20.417 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.417 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.417 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.417 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.417 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.417 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:20.417 "name": "Existed_Raid", 00:20:20.417 "uuid": "81e19284-6764-4b43-afea-402dcafbb04a", 00:20:20.417 "strip_size_kb": 64, 
00:20:20.417 "state": "configuring", 00:20:20.417 "raid_level": "raid0", 00:20:20.417 "superblock": true, 00:20:20.417 "num_base_bdevs": 3, 00:20:20.417 "num_base_bdevs_discovered": 1, 00:20:20.417 "num_base_bdevs_operational": 3, 00:20:20.417 "base_bdevs_list": [ 00:20:20.417 { 00:20:20.417 "name": "BaseBdev1", 00:20:20.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.417 "is_configured": false, 00:20:20.417 "data_offset": 0, 00:20:20.417 "data_size": 0 00:20:20.417 }, 00:20:20.417 { 00:20:20.417 "name": null, 00:20:20.417 "uuid": "86ca2d0d-209d-4d46-bad4-2fabbbc35299", 00:20:20.417 "is_configured": false, 00:20:20.417 "data_offset": 0, 00:20:20.417 "data_size": 63488 00:20:20.417 }, 00:20:20.417 { 00:20:20.417 "name": "BaseBdev3", 00:20:20.417 "uuid": "a35fcd5c-29a2-48f8-9a74-673117f827ec", 00:20:20.417 "is_configured": true, 00:20:20.417 "data_offset": 2048, 00:20:20.417 "data_size": 63488 00:20:20.417 } 00:20:20.417 ] 00:20:20.417 }' 00:20:20.417 12:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:20.417 12:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.678 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.678 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.678 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.678 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:20.678 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.678 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:20.678 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:20:20.678 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.678 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.938 [2024-12-05 12:52:03.270104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:20.938 BaseBdev1 00:20:20.938 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.938 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:20.938 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:20.938 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:20.938 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:20.938 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:20.938 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:20.939 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:20.939 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.939 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.939 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.939 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:20.939 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.939 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.939 
[ 00:20:20.939 { 00:20:20.939 "name": "BaseBdev1", 00:20:20.939 "aliases": [ 00:20:20.939 "093eefae-0d8d-42af-b496-835096ff4dcd" 00:20:20.939 ], 00:20:20.939 "product_name": "Malloc disk", 00:20:20.939 "block_size": 512, 00:20:20.939 "num_blocks": 65536, 00:20:20.939 "uuid": "093eefae-0d8d-42af-b496-835096ff4dcd", 00:20:20.939 "assigned_rate_limits": { 00:20:20.939 "rw_ios_per_sec": 0, 00:20:20.939 "rw_mbytes_per_sec": 0, 00:20:20.939 "r_mbytes_per_sec": 0, 00:20:20.939 "w_mbytes_per_sec": 0 00:20:20.939 }, 00:20:20.939 "claimed": true, 00:20:20.939 "claim_type": "exclusive_write", 00:20:20.939 "zoned": false, 00:20:20.939 "supported_io_types": { 00:20:20.939 "read": true, 00:20:20.939 "write": true, 00:20:20.939 "unmap": true, 00:20:20.939 "flush": true, 00:20:20.939 "reset": true, 00:20:20.939 "nvme_admin": false, 00:20:20.939 "nvme_io": false, 00:20:20.939 "nvme_io_md": false, 00:20:20.939 "write_zeroes": true, 00:20:20.939 "zcopy": true, 00:20:20.939 "get_zone_info": false, 00:20:20.939 "zone_management": false, 00:20:20.939 "zone_append": false, 00:20:20.939 "compare": false, 00:20:20.939 "compare_and_write": false, 00:20:20.939 "abort": true, 00:20:20.939 "seek_hole": false, 00:20:20.939 "seek_data": false, 00:20:20.939 "copy": true, 00:20:20.939 "nvme_iov_md": false 00:20:20.939 }, 00:20:20.939 "memory_domains": [ 00:20:20.939 { 00:20:20.939 "dma_device_id": "system", 00:20:20.939 "dma_device_type": 1 00:20:20.939 }, 00:20:20.939 { 00:20:20.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.939 "dma_device_type": 2 00:20:20.939 } 00:20:20.939 ], 00:20:20.939 "driver_specific": {} 00:20:20.939 } 00:20:20.939 ] 00:20:20.939 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.939 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:20.939 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:20:20.939 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:20.939 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:20.939 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:20.939 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:20.939 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:20.939 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:20.939 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:20.939 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:20.939 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:20.939 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.939 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.940 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.940 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.940 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.940 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:20.940 "name": "Existed_Raid", 00:20:20.940 "uuid": "81e19284-6764-4b43-afea-402dcafbb04a", 00:20:20.940 "strip_size_kb": 64, 00:20:20.940 "state": "configuring", 00:20:20.940 "raid_level": "raid0", 00:20:20.940 "superblock": true, 
00:20:20.940 "num_base_bdevs": 3, 00:20:20.940 "num_base_bdevs_discovered": 2, 00:20:20.940 "num_base_bdevs_operational": 3, 00:20:20.940 "base_bdevs_list": [ 00:20:20.940 { 00:20:20.940 "name": "BaseBdev1", 00:20:20.940 "uuid": "093eefae-0d8d-42af-b496-835096ff4dcd", 00:20:20.940 "is_configured": true, 00:20:20.940 "data_offset": 2048, 00:20:20.940 "data_size": 63488 00:20:20.940 }, 00:20:20.940 { 00:20:20.940 "name": null, 00:20:20.940 "uuid": "86ca2d0d-209d-4d46-bad4-2fabbbc35299", 00:20:20.940 "is_configured": false, 00:20:20.940 "data_offset": 0, 00:20:20.940 "data_size": 63488 00:20:20.940 }, 00:20:20.940 { 00:20:20.940 "name": "BaseBdev3", 00:20:20.940 "uuid": "a35fcd5c-29a2-48f8-9a74-673117f827ec", 00:20:20.940 "is_configured": true, 00:20:20.940 "data_offset": 2048, 00:20:20.940 "data_size": 63488 00:20:20.940 } 00:20:20.940 ] 00:20:20.940 }' 00:20:20.940 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:20.940 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.201 [2024-12-05 12:52:03.630216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.201 "name": "Existed_Raid", 00:20:21.201 "uuid": "81e19284-6764-4b43-afea-402dcafbb04a", 00:20:21.201 "strip_size_kb": 64, 00:20:21.201 "state": "configuring", 00:20:21.201 "raid_level": "raid0", 00:20:21.201 "superblock": true, 00:20:21.201 "num_base_bdevs": 3, 00:20:21.201 "num_base_bdevs_discovered": 1, 00:20:21.201 "num_base_bdevs_operational": 3, 00:20:21.201 "base_bdevs_list": [ 00:20:21.201 { 00:20:21.201 "name": "BaseBdev1", 00:20:21.201 "uuid": "093eefae-0d8d-42af-b496-835096ff4dcd", 00:20:21.201 "is_configured": true, 00:20:21.201 "data_offset": 2048, 00:20:21.201 "data_size": 63488 00:20:21.201 }, 00:20:21.201 { 00:20:21.201 "name": null, 00:20:21.201 "uuid": "86ca2d0d-209d-4d46-bad4-2fabbbc35299", 00:20:21.201 "is_configured": false, 00:20:21.201 "data_offset": 0, 00:20:21.201 "data_size": 63488 00:20:21.201 }, 00:20:21.201 { 00:20:21.201 "name": null, 00:20:21.201 "uuid": "a35fcd5c-29a2-48f8-9a74-673117f827ec", 00:20:21.201 "is_configured": false, 00:20:21.201 "data_offset": 0, 00:20:21.201 "data_size": 63488 00:20:21.201 } 00:20:21.201 ] 00:20:21.201 }' 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.201 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.461 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.461 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.461 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.461 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:20:21.461 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.461 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:21.461 12:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:21.461 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.461 12:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.462 [2024-12-05 12:52:03.998306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:21.462 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.462 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:21.462 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:21.462 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:21.462 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:21.462 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:21.462 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:21.462 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.462 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.462 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.462 12:52:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.462 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.462 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:21.462 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.462 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.462 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.462 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.462 "name": "Existed_Raid", 00:20:21.462 "uuid": "81e19284-6764-4b43-afea-402dcafbb04a", 00:20:21.462 "strip_size_kb": 64, 00:20:21.462 "state": "configuring", 00:20:21.462 "raid_level": "raid0", 00:20:21.462 "superblock": true, 00:20:21.462 "num_base_bdevs": 3, 00:20:21.462 "num_base_bdevs_discovered": 2, 00:20:21.462 "num_base_bdevs_operational": 3, 00:20:21.462 "base_bdevs_list": [ 00:20:21.462 { 00:20:21.462 "name": "BaseBdev1", 00:20:21.462 "uuid": "093eefae-0d8d-42af-b496-835096ff4dcd", 00:20:21.462 "is_configured": true, 00:20:21.462 "data_offset": 2048, 00:20:21.462 "data_size": 63488 00:20:21.462 }, 00:20:21.462 { 00:20:21.462 "name": null, 00:20:21.462 "uuid": "86ca2d0d-209d-4d46-bad4-2fabbbc35299", 00:20:21.462 "is_configured": false, 00:20:21.462 "data_offset": 0, 00:20:21.462 "data_size": 63488 00:20:21.462 }, 00:20:21.462 { 00:20:21.462 "name": "BaseBdev3", 00:20:21.462 "uuid": "a35fcd5c-29a2-48f8-9a74-673117f827ec", 00:20:21.462 "is_configured": true, 00:20:21.462 "data_offset": 2048, 00:20:21.462 "data_size": 63488 00:20:21.462 } 00:20:21.462 ] 00:20:21.462 }' 00:20:21.462 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.462 
12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.033 [2024-12-05 12:52:04.346377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:22.033 12:52:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.033 "name": "Existed_Raid", 00:20:22.033 "uuid": "81e19284-6764-4b43-afea-402dcafbb04a", 00:20:22.033 "strip_size_kb": 64, 00:20:22.033 "state": "configuring", 00:20:22.033 "raid_level": "raid0", 00:20:22.033 "superblock": true, 00:20:22.033 "num_base_bdevs": 3, 00:20:22.033 "num_base_bdevs_discovered": 1, 00:20:22.033 "num_base_bdevs_operational": 3, 00:20:22.033 "base_bdevs_list": [ 00:20:22.033 { 00:20:22.033 "name": null, 00:20:22.033 "uuid": "093eefae-0d8d-42af-b496-835096ff4dcd", 00:20:22.033 "is_configured": false, 00:20:22.033 "data_offset": 0, 00:20:22.033 "data_size": 63488 00:20:22.033 }, 00:20:22.033 { 00:20:22.033 "name": null, 00:20:22.033 "uuid": "86ca2d0d-209d-4d46-bad4-2fabbbc35299", 00:20:22.033 "is_configured": false, 
00:20:22.033 "data_offset": 0, 00:20:22.033 "data_size": 63488 00:20:22.033 }, 00:20:22.033 { 00:20:22.033 "name": "BaseBdev3", 00:20:22.033 "uuid": "a35fcd5c-29a2-48f8-9a74-673117f827ec", 00:20:22.033 "is_configured": true, 00:20:22.033 "data_offset": 2048, 00:20:22.033 "data_size": 63488 00:20:22.033 } 00:20:22.033 ] 00:20:22.033 }' 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.033 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.294 [2024-12-05 12:52:04.753128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 
00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.294 "name": "Existed_Raid", 00:20:22.294 "uuid": "81e19284-6764-4b43-afea-402dcafbb04a", 00:20:22.294 "strip_size_kb": 64, 00:20:22.294 "state": "configuring", 00:20:22.294 "raid_level": "raid0", 00:20:22.294 "superblock": true, 00:20:22.294 
"num_base_bdevs": 3, 00:20:22.294 "num_base_bdevs_discovered": 2, 00:20:22.294 "num_base_bdevs_operational": 3, 00:20:22.294 "base_bdevs_list": [ 00:20:22.294 { 00:20:22.294 "name": null, 00:20:22.294 "uuid": "093eefae-0d8d-42af-b496-835096ff4dcd", 00:20:22.294 "is_configured": false, 00:20:22.294 "data_offset": 0, 00:20:22.294 "data_size": 63488 00:20:22.294 }, 00:20:22.294 { 00:20:22.294 "name": "BaseBdev2", 00:20:22.294 "uuid": "86ca2d0d-209d-4d46-bad4-2fabbbc35299", 00:20:22.294 "is_configured": true, 00:20:22.294 "data_offset": 2048, 00:20:22.294 "data_size": 63488 00:20:22.294 }, 00:20:22.294 { 00:20:22.294 "name": "BaseBdev3", 00:20:22.294 "uuid": "a35fcd5c-29a2-48f8-9a74-673117f827ec", 00:20:22.294 "is_configured": true, 00:20:22.294 "data_offset": 2048, 00:20:22.294 "data_size": 63488 00:20:22.294 } 00:20:22.294 ] 00:20:22.294 }' 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.294 12:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.555 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.555 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:22.555 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.555 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.555 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.555 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:22.555 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.555 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:22.555 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.555 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:22.555 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.816 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 093eefae-0d8d-42af-b496-835096ff4dcd 00:20:22.816 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.816 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.816 [2024-12-05 12:52:05.163582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:22.816 [2024-12-05 12:52:05.163732] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:22.816 [2024-12-05 12:52:05.163744] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:22.816 [2024-12-05 12:52:05.163935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:22.816 NewBaseBdev 00:20:22.816 [2024-12-05 12:52:05.164037] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:22.816 [2024-12-05 12:52:05.164044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:22.816 [2024-12-05 12:52:05.164139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:22.816 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.816 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=NewBaseBdev 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.817 [ 00:20:22.817 { 00:20:22.817 "name": "NewBaseBdev", 00:20:22.817 "aliases": [ 00:20:22.817 "093eefae-0d8d-42af-b496-835096ff4dcd" 00:20:22.817 ], 00:20:22.817 "product_name": "Malloc disk", 00:20:22.817 "block_size": 512, 00:20:22.817 "num_blocks": 65536, 00:20:22.817 "uuid": "093eefae-0d8d-42af-b496-835096ff4dcd", 00:20:22.817 "assigned_rate_limits": { 00:20:22.817 "rw_ios_per_sec": 0, 00:20:22.817 "rw_mbytes_per_sec": 0, 00:20:22.817 "r_mbytes_per_sec": 0, 00:20:22.817 "w_mbytes_per_sec": 0 00:20:22.817 }, 00:20:22.817 "claimed": true, 00:20:22.817 "claim_type": "exclusive_write", 00:20:22.817 "zoned": false, 00:20:22.817 "supported_io_types": { 00:20:22.817 "read": true, 00:20:22.817 
"write": true, 00:20:22.817 "unmap": true, 00:20:22.817 "flush": true, 00:20:22.817 "reset": true, 00:20:22.817 "nvme_admin": false, 00:20:22.817 "nvme_io": false, 00:20:22.817 "nvme_io_md": false, 00:20:22.817 "write_zeroes": true, 00:20:22.817 "zcopy": true, 00:20:22.817 "get_zone_info": false, 00:20:22.817 "zone_management": false, 00:20:22.817 "zone_append": false, 00:20:22.817 "compare": false, 00:20:22.817 "compare_and_write": false, 00:20:22.817 "abort": true, 00:20:22.817 "seek_hole": false, 00:20:22.817 "seek_data": false, 00:20:22.817 "copy": true, 00:20:22.817 "nvme_iov_md": false 00:20:22.817 }, 00:20:22.817 "memory_domains": [ 00:20:22.817 { 00:20:22.817 "dma_device_id": "system", 00:20:22.817 "dma_device_type": 1 00:20:22.817 }, 00:20:22.817 { 00:20:22.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.817 "dma_device_type": 2 00:20:22.817 } 00:20:22.817 ], 00:20:22.817 "driver_specific": {} 00:20:22.817 } 00:20:22.817 ] 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.817 "name": "Existed_Raid", 00:20:22.817 "uuid": "81e19284-6764-4b43-afea-402dcafbb04a", 00:20:22.817 "strip_size_kb": 64, 00:20:22.817 "state": "online", 00:20:22.817 "raid_level": "raid0", 00:20:22.817 "superblock": true, 00:20:22.817 "num_base_bdevs": 3, 00:20:22.817 "num_base_bdevs_discovered": 3, 00:20:22.817 "num_base_bdevs_operational": 3, 00:20:22.817 "base_bdevs_list": [ 00:20:22.817 { 00:20:22.817 "name": "NewBaseBdev", 00:20:22.817 "uuid": "093eefae-0d8d-42af-b496-835096ff4dcd", 00:20:22.817 "is_configured": true, 00:20:22.817 "data_offset": 2048, 00:20:22.817 "data_size": 63488 00:20:22.817 }, 00:20:22.817 { 00:20:22.817 "name": "BaseBdev2", 00:20:22.817 "uuid": "86ca2d0d-209d-4d46-bad4-2fabbbc35299", 00:20:22.817 "is_configured": true, 00:20:22.817 "data_offset": 2048, 00:20:22.817 "data_size": 63488 00:20:22.817 }, 00:20:22.817 { 00:20:22.817 "name": "BaseBdev3", 00:20:22.817 "uuid": 
"a35fcd5c-29a2-48f8-9a74-673117f827ec", 00:20:22.817 "is_configured": true, 00:20:22.817 "data_offset": 2048, 00:20:22.817 "data_size": 63488 00:20:22.817 } 00:20:22.817 ] 00:20:22.817 }' 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.817 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.076 [2024-12-05 12:52:05.519938] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:23.076 "name": "Existed_Raid", 00:20:23.076 "aliases": [ 00:20:23.076 "81e19284-6764-4b43-afea-402dcafbb04a" 
00:20:23.076 ], 00:20:23.076 "product_name": "Raid Volume", 00:20:23.076 "block_size": 512, 00:20:23.076 "num_blocks": 190464, 00:20:23.076 "uuid": "81e19284-6764-4b43-afea-402dcafbb04a", 00:20:23.076 "assigned_rate_limits": { 00:20:23.076 "rw_ios_per_sec": 0, 00:20:23.076 "rw_mbytes_per_sec": 0, 00:20:23.076 "r_mbytes_per_sec": 0, 00:20:23.076 "w_mbytes_per_sec": 0 00:20:23.076 }, 00:20:23.076 "claimed": false, 00:20:23.076 "zoned": false, 00:20:23.076 "supported_io_types": { 00:20:23.076 "read": true, 00:20:23.076 "write": true, 00:20:23.076 "unmap": true, 00:20:23.076 "flush": true, 00:20:23.076 "reset": true, 00:20:23.076 "nvme_admin": false, 00:20:23.076 "nvme_io": false, 00:20:23.076 "nvme_io_md": false, 00:20:23.076 "write_zeroes": true, 00:20:23.076 "zcopy": false, 00:20:23.076 "get_zone_info": false, 00:20:23.076 "zone_management": false, 00:20:23.076 "zone_append": false, 00:20:23.076 "compare": false, 00:20:23.076 "compare_and_write": false, 00:20:23.076 "abort": false, 00:20:23.076 "seek_hole": false, 00:20:23.076 "seek_data": false, 00:20:23.076 "copy": false, 00:20:23.076 "nvme_iov_md": false 00:20:23.076 }, 00:20:23.076 "memory_domains": [ 00:20:23.076 { 00:20:23.076 "dma_device_id": "system", 00:20:23.076 "dma_device_type": 1 00:20:23.076 }, 00:20:23.076 { 00:20:23.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.076 "dma_device_type": 2 00:20:23.076 }, 00:20:23.076 { 00:20:23.076 "dma_device_id": "system", 00:20:23.076 "dma_device_type": 1 00:20:23.076 }, 00:20:23.076 { 00:20:23.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.076 "dma_device_type": 2 00:20:23.076 }, 00:20:23.076 { 00:20:23.076 "dma_device_id": "system", 00:20:23.076 "dma_device_type": 1 00:20:23.076 }, 00:20:23.076 { 00:20:23.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.076 "dma_device_type": 2 00:20:23.076 } 00:20:23.076 ], 00:20:23.076 "driver_specific": { 00:20:23.076 "raid": { 00:20:23.076 "uuid": "81e19284-6764-4b43-afea-402dcafbb04a", 00:20:23.076 
"strip_size_kb": 64, 00:20:23.076 "state": "online", 00:20:23.076 "raid_level": "raid0", 00:20:23.076 "superblock": true, 00:20:23.076 "num_base_bdevs": 3, 00:20:23.076 "num_base_bdevs_discovered": 3, 00:20:23.076 "num_base_bdevs_operational": 3, 00:20:23.076 "base_bdevs_list": [ 00:20:23.076 { 00:20:23.076 "name": "NewBaseBdev", 00:20:23.076 "uuid": "093eefae-0d8d-42af-b496-835096ff4dcd", 00:20:23.076 "is_configured": true, 00:20:23.076 "data_offset": 2048, 00:20:23.076 "data_size": 63488 00:20:23.076 }, 00:20:23.076 { 00:20:23.076 "name": "BaseBdev2", 00:20:23.076 "uuid": "86ca2d0d-209d-4d46-bad4-2fabbbc35299", 00:20:23.076 "is_configured": true, 00:20:23.076 "data_offset": 2048, 00:20:23.076 "data_size": 63488 00:20:23.076 }, 00:20:23.076 { 00:20:23.076 "name": "BaseBdev3", 00:20:23.076 "uuid": "a35fcd5c-29a2-48f8-9a74-673117f827ec", 00:20:23.076 "is_configured": true, 00:20:23.076 "data_offset": 2048, 00:20:23.076 "data_size": 63488 00:20:23.076 } 00:20:23.076 ] 00:20:23.076 } 00:20:23.076 } 00:20:23.076 }' 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:23.076 BaseBdev2 00:20:23.076 BaseBdev3' 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.076 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.336 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:23.336 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:23.336 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:23.336 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:23.336 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 00:20:23.336 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.336 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.336 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.336 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:23.336 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:23.336 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:23.336 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.336 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.336 [2024-12-05 12:52:05.703725] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:23.336 [2024-12-05 12:52:05.703750] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:23.336 [2024-12-05 12:52:05.703809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:23.336 [2024-12-05 12:52:05.703859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:23.336 [2024-12-05 12:52:05.703868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:23.336 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.336 12:52:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62855 00:20:23.336 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62855 ']' 00:20:23.336 12:52:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 62855 00:20:23.336 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:23.336 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:23.336 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62855 00:20:23.336 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:23.337 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:23.337 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62855' 00:20:23.337 killing process with pid 62855 00:20:23.337 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62855 00:20:23.337 [2024-12-05 12:52:05.733758] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:23.337 12:52:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62855 00:20:23.337 [2024-12-05 12:52:05.880996] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:23.905 12:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:20:23.905 00:20:23.905 real 0m7.437s 00:20:23.905 user 0m12.061s 00:20:23.905 sys 0m1.173s 00:20:23.905 12:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:23.905 ************************************ 00:20:23.905 END TEST raid_state_function_test_sb 00:20:23.905 ************************************ 00:20:23.905 12:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.165 12:52:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:20:24.165 12:52:06 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:24.165 12:52:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:24.165 12:52:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:24.165 ************************************ 00:20:24.165 START TEST raid_superblock_test 00:20:24.165 ************************************ 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:20:24.165 12:52:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63444 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63444 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63444 ']' 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:24.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.165 12:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:24.165 [2024-12-05 12:52:06.568093] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:20:24.165 [2024-12-05 12:52:06.568214] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63444 ] 00:20:24.165 [2024-12-05 12:52:06.726430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.425 [2024-12-05 12:52:06.811108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.425 [2024-12-05 12:52:06.931612] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:24.425 [2024-12-05 12:52:06.931662] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:24.995 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:20:24.996 
12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.996 malloc1 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.996 [2024-12-05 12:52:07.509749] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:24.996 [2024-12-05 12:52:07.509798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.996 [2024-12-05 12:52:07.509815] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:24.996 [2024-12-05 12:52:07.509823] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.996 [2024-12-05 12:52:07.511653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.996 [2024-12-05 12:52:07.511784] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:24.996 pt1 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.996 malloc2 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.996 [2024-12-05 12:52:07.541805] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:24.996 [2024-12-05 12:52:07.541849] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.996 [2024-12-05 12:52:07.541870] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:24.996 [2024-12-05 12:52:07.541877] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.996 [2024-12-05 12:52:07.543655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.996 [2024-12-05 12:52:07.543683] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:24.996 
pt2 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.996 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.257 malloc3 00:20:25.257 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.257 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:25.257 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.257 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.257 [2024-12-05 12:52:07.587313] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:25.257 [2024-12-05 12:52:07.587362] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.257 [2024-12-05 12:52:07.587382] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:25.258 [2024-12-05 12:52:07.587389] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.258 [2024-12-05 12:52:07.589098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.258 [2024-12-05 12:52:07.589226] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:25.258 pt3 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.258 [2024-12-05 12:52:07.595367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:25.258 [2024-12-05 12:52:07.596898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:25.258 [2024-12-05 12:52:07.596953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:25.258 [2024-12-05 12:52:07.597079] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:25.258 [2024-12-05 12:52:07.597089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:25.258 [2024-12-05 12:52:07.597298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:20:25.258 [2024-12-05 12:52:07.597414] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:25.258 [2024-12-05 12:52:07.597421] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:25.258 [2024-12-05 12:52:07.597547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.258 12:52:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.258 "name": "raid_bdev1", 00:20:25.258 "uuid": "d22f15f6-1cf1-4f22-9f93-cda460daa6d4", 00:20:25.258 "strip_size_kb": 64, 00:20:25.258 "state": "online", 00:20:25.258 "raid_level": "raid0", 00:20:25.258 "superblock": true, 00:20:25.258 "num_base_bdevs": 3, 00:20:25.258 "num_base_bdevs_discovered": 3, 00:20:25.258 "num_base_bdevs_operational": 3, 00:20:25.258 "base_bdevs_list": [ 00:20:25.258 { 00:20:25.258 "name": "pt1", 00:20:25.258 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:25.258 "is_configured": true, 00:20:25.258 "data_offset": 2048, 00:20:25.258 "data_size": 63488 00:20:25.258 }, 00:20:25.258 { 00:20:25.258 "name": "pt2", 00:20:25.258 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:25.258 "is_configured": true, 00:20:25.258 "data_offset": 2048, 00:20:25.258 "data_size": 63488 00:20:25.258 }, 00:20:25.258 { 00:20:25.258 "name": "pt3", 00:20:25.258 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:25.258 "is_configured": true, 00:20:25.258 "data_offset": 2048, 00:20:25.258 "data_size": 63488 00:20:25.258 } 00:20:25.258 ] 00:20:25.258 }' 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.258 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.517 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:25.517 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:25.517 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:25.517 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:20:25.517 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:25.517 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:25.517 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:25.517 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:25.517 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.517 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.517 [2024-12-05 12:52:07.911685] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:25.517 12:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.517 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:25.517 "name": "raid_bdev1", 00:20:25.517 "aliases": [ 00:20:25.517 "d22f15f6-1cf1-4f22-9f93-cda460daa6d4" 00:20:25.517 ], 00:20:25.517 "product_name": "Raid Volume", 00:20:25.517 "block_size": 512, 00:20:25.517 "num_blocks": 190464, 00:20:25.517 "uuid": "d22f15f6-1cf1-4f22-9f93-cda460daa6d4", 00:20:25.517 "assigned_rate_limits": { 00:20:25.517 "rw_ios_per_sec": 0, 00:20:25.517 "rw_mbytes_per_sec": 0, 00:20:25.517 "r_mbytes_per_sec": 0, 00:20:25.517 "w_mbytes_per_sec": 0 00:20:25.517 }, 00:20:25.517 "claimed": false, 00:20:25.517 "zoned": false, 00:20:25.517 "supported_io_types": { 00:20:25.517 "read": true, 00:20:25.517 "write": true, 00:20:25.517 "unmap": true, 00:20:25.517 "flush": true, 00:20:25.517 "reset": true, 00:20:25.517 "nvme_admin": false, 00:20:25.517 "nvme_io": false, 00:20:25.517 "nvme_io_md": false, 00:20:25.517 "write_zeroes": true, 00:20:25.517 "zcopy": false, 00:20:25.517 "get_zone_info": false, 00:20:25.517 "zone_management": false, 00:20:25.517 "zone_append": false, 00:20:25.517 "compare": 
false, 00:20:25.517 "compare_and_write": false, 00:20:25.517 "abort": false, 00:20:25.517 "seek_hole": false, 00:20:25.517 "seek_data": false, 00:20:25.517 "copy": false, 00:20:25.517 "nvme_iov_md": false 00:20:25.517 }, 00:20:25.517 "memory_domains": [ 00:20:25.517 { 00:20:25.517 "dma_device_id": "system", 00:20:25.517 "dma_device_type": 1 00:20:25.517 }, 00:20:25.517 { 00:20:25.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.518 "dma_device_type": 2 00:20:25.518 }, 00:20:25.518 { 00:20:25.518 "dma_device_id": "system", 00:20:25.518 "dma_device_type": 1 00:20:25.518 }, 00:20:25.518 { 00:20:25.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.518 "dma_device_type": 2 00:20:25.518 }, 00:20:25.518 { 00:20:25.518 "dma_device_id": "system", 00:20:25.518 "dma_device_type": 1 00:20:25.518 }, 00:20:25.518 { 00:20:25.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.518 "dma_device_type": 2 00:20:25.518 } 00:20:25.518 ], 00:20:25.518 "driver_specific": { 00:20:25.518 "raid": { 00:20:25.518 "uuid": "d22f15f6-1cf1-4f22-9f93-cda460daa6d4", 00:20:25.518 "strip_size_kb": 64, 00:20:25.518 "state": "online", 00:20:25.518 "raid_level": "raid0", 00:20:25.518 "superblock": true, 00:20:25.518 "num_base_bdevs": 3, 00:20:25.518 "num_base_bdevs_discovered": 3, 00:20:25.518 "num_base_bdevs_operational": 3, 00:20:25.518 "base_bdevs_list": [ 00:20:25.518 { 00:20:25.518 "name": "pt1", 00:20:25.518 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:25.518 "is_configured": true, 00:20:25.518 "data_offset": 2048, 00:20:25.518 "data_size": 63488 00:20:25.518 }, 00:20:25.518 { 00:20:25.518 "name": "pt2", 00:20:25.518 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:25.518 "is_configured": true, 00:20:25.518 "data_offset": 2048, 00:20:25.518 "data_size": 63488 00:20:25.518 }, 00:20:25.518 { 00:20:25.518 "name": "pt3", 00:20:25.518 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:25.518 "is_configured": true, 00:20:25.518 "data_offset": 2048, 00:20:25.518 "data_size": 
63488 00:20:25.518 } 00:20:25.518 ] 00:20:25.518 } 00:20:25.518 } 00:20:25.518 }' 00:20:25.518 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:25.518 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:25.518 pt2 00:20:25.518 pt3' 00:20:25.518 12:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.518 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:25.778 [2024-12-05 12:52:08.107696] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d22f15f6-1cf1-4f22-9f93-cda460daa6d4 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d22f15f6-1cf1-4f22-9f93-cda460daa6d4 ']' 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.778 [2024-12-05 12:52:08.139425] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:25.778 [2024-12-05 12:52:08.139544] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:25.778 [2024-12-05 12:52:08.139615] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:25.778 [2024-12-05 12:52:08.139670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:25.778 [2024-12-05 12:52:08.139679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.778 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.778 [2024-12-05 12:52:08.243467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:25.778 [2024-12-05 12:52:08.245089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:25.778 [2024-12-05 12:52:08.245130] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:25.778 [2024-12-05 12:52:08.245169] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:25.778 [2024-12-05 12:52:08.245207] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:25.778 [2024-12-05 12:52:08.245223] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:25.778 [2024-12-05 12:52:08.245236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:25.779 [2024-12-05 12:52:08.245247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:25.779 request: 00:20:25.779 { 00:20:25.779 "name": "raid_bdev1", 00:20:25.779 "raid_level": "raid0", 00:20:25.779 "base_bdevs": [ 00:20:25.779 "malloc1", 00:20:25.779 "malloc2", 00:20:25.779 "malloc3" 00:20:25.779 ], 00:20:25.779 "strip_size_kb": 64, 00:20:25.779 "superblock": false, 00:20:25.779 "method": "bdev_raid_create", 00:20:25.779 "req_id": 1 00:20:25.779 } 00:20:25.779 Got JSON-RPC error response 00:20:25.779 response: 00:20:25.779 { 00:20:25.779 "code": -17, 00:20:25.779 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:25.779 } 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.779 [2024-12-05 12:52:08.287451] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:25.779 [2024-12-05 12:52:08.287614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.779 [2024-12-05 12:52:08.287636] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:25.779 [2024-12-05 12:52:08.287643] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.779 [2024-12-05 12:52:08.289394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.779 [2024-12-05 12:52:08.289419] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:25.779 [2024-12-05 12:52:08.289482] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:25.779 [2024-12-05 12:52:08.289537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:20:25.779 pt1 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.779 "name": "raid_bdev1", 00:20:25.779 "uuid": "d22f15f6-1cf1-4f22-9f93-cda460daa6d4", 00:20:25.779 
"strip_size_kb": 64, 00:20:25.779 "state": "configuring", 00:20:25.779 "raid_level": "raid0", 00:20:25.779 "superblock": true, 00:20:25.779 "num_base_bdevs": 3, 00:20:25.779 "num_base_bdevs_discovered": 1, 00:20:25.779 "num_base_bdevs_operational": 3, 00:20:25.779 "base_bdevs_list": [ 00:20:25.779 { 00:20:25.779 "name": "pt1", 00:20:25.779 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:25.779 "is_configured": true, 00:20:25.779 "data_offset": 2048, 00:20:25.779 "data_size": 63488 00:20:25.779 }, 00:20:25.779 { 00:20:25.779 "name": null, 00:20:25.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:25.779 "is_configured": false, 00:20:25.779 "data_offset": 2048, 00:20:25.779 "data_size": 63488 00:20:25.779 }, 00:20:25.779 { 00:20:25.779 "name": null, 00:20:25.779 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:25.779 "is_configured": false, 00:20:25.779 "data_offset": 2048, 00:20:25.779 "data_size": 63488 00:20:25.779 } 00:20:25.779 ] 00:20:25.779 }' 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.779 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.041 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:20:26.041 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:26.041 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.041 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.041 [2024-12-05 12:52:08.607545] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:26.041 [2024-12-05 12:52:08.607661] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.041 [2024-12-05 12:52:08.607680] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:20:26.041 [2024-12-05 12:52:08.607687] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.041 [2024-12-05 12:52:08.608022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.041 [2024-12-05 12:52:08.608035] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:26.041 [2024-12-05 12:52:08.608098] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:26.041 [2024-12-05 12:52:08.608118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:26.041 pt2 00:20:26.041 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.041 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:20:26.041 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.041 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.041 [2024-12-05 12:52:08.615562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:26.041 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.041 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:20:26.041 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:26.041 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:26.041 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:26.041 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:26.041 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:26.041 12:52:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.041 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.041 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.041 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.303 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.303 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.303 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.303 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.303 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.303 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.303 "name": "raid_bdev1", 00:20:26.303 "uuid": "d22f15f6-1cf1-4f22-9f93-cda460daa6d4", 00:20:26.303 "strip_size_kb": 64, 00:20:26.303 "state": "configuring", 00:20:26.303 "raid_level": "raid0", 00:20:26.303 "superblock": true, 00:20:26.303 "num_base_bdevs": 3, 00:20:26.303 "num_base_bdevs_discovered": 1, 00:20:26.303 "num_base_bdevs_operational": 3, 00:20:26.303 "base_bdevs_list": [ 00:20:26.303 { 00:20:26.303 "name": "pt1", 00:20:26.303 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:26.303 "is_configured": true, 00:20:26.303 "data_offset": 2048, 00:20:26.303 "data_size": 63488 00:20:26.303 }, 00:20:26.303 { 00:20:26.303 "name": null, 00:20:26.303 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:26.303 "is_configured": false, 00:20:26.303 "data_offset": 0, 00:20:26.303 "data_size": 63488 00:20:26.303 }, 00:20:26.303 { 00:20:26.303 "name": null, 00:20:26.303 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:26.303 
"is_configured": false, 00:20:26.303 "data_offset": 2048, 00:20:26.303 "data_size": 63488 00:20:26.303 } 00:20:26.303 ] 00:20:26.303 }' 00:20:26.303 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.303 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.563 [2024-12-05 12:52:08.935602] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:26.563 [2024-12-05 12:52:08.935750] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.563 [2024-12-05 12:52:08.935783] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:26.563 [2024-12-05 12:52:08.935846] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.563 [2024-12-05 12:52:08.936224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.563 [2024-12-05 12:52:08.936319] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:26.563 [2024-12-05 12:52:08.936448] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:26.563 [2024-12-05 12:52:08.936533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:26.563 pt2 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.563 [2024-12-05 12:52:08.943596] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:26.563 [2024-12-05 12:52:08.943632] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.563 [2024-12-05 12:52:08.943643] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:26.563 [2024-12-05 12:52:08.943651] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.563 [2024-12-05 12:52:08.943944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.563 [2024-12-05 12:52:08.943963] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:26.563 [2024-12-05 12:52:08.944007] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:26.563 [2024-12-05 12:52:08.944027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:26.563 [2024-12-05 12:52:08.944117] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:26.563 [2024-12-05 12:52:08.944129] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:26.563 [2024-12-05 12:52:08.944333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:26.563 [2024-12-05 12:52:08.944449] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:26.563 [2024-12-05 12:52:08.944456] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:26.563 [2024-12-05 12:52:08.944574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:26.563 pt3 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.563 "name": "raid_bdev1", 00:20:26.563 "uuid": "d22f15f6-1cf1-4f22-9f93-cda460daa6d4", 00:20:26.563 "strip_size_kb": 64, 00:20:26.563 "state": "online", 00:20:26.563 "raid_level": "raid0", 00:20:26.563 "superblock": true, 00:20:26.563 "num_base_bdevs": 3, 00:20:26.563 "num_base_bdevs_discovered": 3, 00:20:26.563 "num_base_bdevs_operational": 3, 00:20:26.563 "base_bdevs_list": [ 00:20:26.563 { 00:20:26.563 "name": "pt1", 00:20:26.563 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:26.563 "is_configured": true, 00:20:26.563 "data_offset": 2048, 00:20:26.563 "data_size": 63488 00:20:26.563 }, 00:20:26.563 { 00:20:26.563 "name": "pt2", 00:20:26.563 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:26.563 "is_configured": true, 00:20:26.563 "data_offset": 2048, 00:20:26.563 "data_size": 63488 00:20:26.563 }, 00:20:26.563 { 00:20:26.563 "name": "pt3", 00:20:26.563 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:26.563 "is_configured": true, 00:20:26.563 "data_offset": 2048, 00:20:26.563 "data_size": 63488 00:20:26.563 } 00:20:26.563 ] 00:20:26.563 }' 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.563 12:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.838 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:26.838 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:26.838 12:52:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:26.838 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:26.838 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:26.838 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:26.838 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:26.838 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.838 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.838 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:26.838 [2024-12-05 12:52:09.263942] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:26.838 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.838 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:26.838 "name": "raid_bdev1", 00:20:26.838 "aliases": [ 00:20:26.838 "d22f15f6-1cf1-4f22-9f93-cda460daa6d4" 00:20:26.838 ], 00:20:26.838 "product_name": "Raid Volume", 00:20:26.838 "block_size": 512, 00:20:26.838 "num_blocks": 190464, 00:20:26.838 "uuid": "d22f15f6-1cf1-4f22-9f93-cda460daa6d4", 00:20:26.838 "assigned_rate_limits": { 00:20:26.838 "rw_ios_per_sec": 0, 00:20:26.838 "rw_mbytes_per_sec": 0, 00:20:26.838 "r_mbytes_per_sec": 0, 00:20:26.838 "w_mbytes_per_sec": 0 00:20:26.838 }, 00:20:26.838 "claimed": false, 00:20:26.838 "zoned": false, 00:20:26.838 "supported_io_types": { 00:20:26.838 "read": true, 00:20:26.838 "write": true, 00:20:26.838 "unmap": true, 00:20:26.838 "flush": true, 00:20:26.838 "reset": true, 00:20:26.838 "nvme_admin": false, 00:20:26.842 "nvme_io": false, 00:20:26.842 "nvme_io_md": false, 00:20:26.842 
"write_zeroes": true, 00:20:26.842 "zcopy": false, 00:20:26.842 "get_zone_info": false, 00:20:26.842 "zone_management": false, 00:20:26.842 "zone_append": false, 00:20:26.842 "compare": false, 00:20:26.842 "compare_and_write": false, 00:20:26.842 "abort": false, 00:20:26.842 "seek_hole": false, 00:20:26.842 "seek_data": false, 00:20:26.842 "copy": false, 00:20:26.842 "nvme_iov_md": false 00:20:26.842 }, 00:20:26.842 "memory_domains": [ 00:20:26.842 { 00:20:26.842 "dma_device_id": "system", 00:20:26.842 "dma_device_type": 1 00:20:26.842 }, 00:20:26.842 { 00:20:26.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.842 "dma_device_type": 2 00:20:26.842 }, 00:20:26.842 { 00:20:26.843 "dma_device_id": "system", 00:20:26.843 "dma_device_type": 1 00:20:26.843 }, 00:20:26.843 { 00:20:26.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.843 "dma_device_type": 2 00:20:26.843 }, 00:20:26.843 { 00:20:26.843 "dma_device_id": "system", 00:20:26.843 "dma_device_type": 1 00:20:26.843 }, 00:20:26.843 { 00:20:26.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.843 "dma_device_type": 2 00:20:26.843 } 00:20:26.843 ], 00:20:26.843 "driver_specific": { 00:20:26.843 "raid": { 00:20:26.843 "uuid": "d22f15f6-1cf1-4f22-9f93-cda460daa6d4", 00:20:26.843 "strip_size_kb": 64, 00:20:26.843 "state": "online", 00:20:26.843 "raid_level": "raid0", 00:20:26.843 "superblock": true, 00:20:26.843 "num_base_bdevs": 3, 00:20:26.843 "num_base_bdevs_discovered": 3, 00:20:26.843 "num_base_bdevs_operational": 3, 00:20:26.843 "base_bdevs_list": [ 00:20:26.843 { 00:20:26.843 "name": "pt1", 00:20:26.843 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:26.843 "is_configured": true, 00:20:26.843 "data_offset": 2048, 00:20:26.843 "data_size": 63488 00:20:26.843 }, 00:20:26.843 { 00:20:26.843 "name": "pt2", 00:20:26.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:26.843 "is_configured": true, 00:20:26.843 "data_offset": 2048, 00:20:26.843 "data_size": 63488 00:20:26.843 }, 00:20:26.843 
{ 00:20:26.843 "name": "pt3", 00:20:26.843 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:26.843 "is_configured": true, 00:20:26.843 "data_offset": 2048, 00:20:26.843 "data_size": 63488 00:20:26.843 } 00:20:26.843 ] 00:20:26.843 } 00:20:26.843 } 00:20:26.843 }' 00:20:26.843 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:26.843 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:26.843 pt2 00:20:26.844 pt3' 00:20:26.844 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:26.844 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:26.844 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:26.844 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:26.844 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.844 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.844 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:26.844 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.844 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:26.844 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:26.844 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:26.844 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:20:26.844 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:26.846 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.846 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.846 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.846 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:26.846 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:26.846 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:26.846 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:26.846 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:26.846 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.846 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.106 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.106 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:27.106 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:27.106 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:27.106 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.106 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.106 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:27.106 [2024-12-05 
12:52:09.447907] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:27.106 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.106 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d22f15f6-1cf1-4f22-9f93-cda460daa6d4 '!=' d22f15f6-1cf1-4f22-9f93-cda460daa6d4 ']' 00:20:27.106 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:20:27.106 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:27.106 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:27.106 12:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63444 00:20:27.106 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63444 ']' 00:20:27.106 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63444 00:20:27.107 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:20:27.107 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.107 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63444 00:20:27.107 killing process with pid 63444 00:20:27.107 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:27.107 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:27.107 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63444' 00:20:27.107 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63444 00:20:27.107 [2024-12-05 12:52:09.499773] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:27.107 [2024-12-05 12:52:09.499845] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:27.107 12:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63444 00:20:27.107 [2024-12-05 12:52:09.499894] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:27.107 [2024-12-05 12:52:09.499904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:27.107 [2024-12-05 12:52:09.645321] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:27.676 12:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:27.676 00:20:27.676 real 0m3.725s 00:20:27.676 user 0m5.453s 00:20:27.676 sys 0m0.603s 00:20:27.676 12:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.676 ************************************ 00:20:27.676 END TEST raid_superblock_test 00:20:27.676 ************************************ 00:20:27.676 12:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.676 12:52:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:20:27.676 12:52:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:27.676 12:52:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.676 12:52:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:27.937 ************************************ 00:20:27.937 START TEST raid_read_error_test 00:20:27.937 ************************************ 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:20:27.937 12:52:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:20:27.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZIUb7Or9BQ 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63686 00:20:27.937 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63686 00:20:27.938 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63686 ']' 00:20:27.938 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.938 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:27.938 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.938 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:27.938 12:52:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.938 12:52:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:27.938 [2024-12-05 12:52:10.338408] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:20:27.938 [2024-12-05 12:52:10.338545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63686 ] 00:20:27.938 [2024-12-05 12:52:10.492244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.196 [2024-12-05 12:52:10.577435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.196 [2024-12-05 12:52:10.687449] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:28.196 [2024-12-05 12:52:10.687483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.764 BaseBdev1_malloc 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.764 true 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.764 [2024-12-05 12:52:11.170846] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:28.764 [2024-12-05 12:52:11.170900] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.764 [2024-12-05 12:52:11.170919] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:28.764 [2024-12-05 12:52:11.170929] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.764 [2024-12-05 12:52:11.172772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.764 [2024-12-05 12:52:11.172955] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:28.764 BaseBdev1 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.764 BaseBdev2_malloc 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.764 true 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.764 [2024-12-05 12:52:11.210717] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:28.764 [2024-12-05 12:52:11.210769] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.764 [2024-12-05 12:52:11.210785] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:28.764 [2024-12-05 12:52:11.210794] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.764 [2024-12-05 12:52:11.212657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.764 [2024-12-05 12:52:11.212690] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:28.764 BaseBdev2 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.764 BaseBdev3_malloc 00:20:28.764 12:52:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.764 true 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.764 [2024-12-05 12:52:11.265632] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:28.764 [2024-12-05 12:52:11.265688] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.764 [2024-12-05 12:52:11.265706] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:28.764 [2024-12-05 12:52:11.265716] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.764 [2024-12-05 12:52:11.267550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.764 [2024-12-05 12:52:11.267582] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:28.764 BaseBdev3 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.764 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.764 [2024-12-05 12:52:11.273700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:28.764 [2024-12-05 12:52:11.275263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:28.764 [2024-12-05 12:52:11.275431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:28.764 [2024-12-05 12:52:11.275637] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:28.764 [2024-12-05 12:52:11.275650] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:28.764 [2024-12-05 12:52:11.275886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:20:28.764 [2024-12-05 12:52:11.276012] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:28.765 [2024-12-05 12:52:11.276022] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:28.765 [2024-12-05 12:52:11.276149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.765 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.765 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:28.765 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:28.765 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:28.765 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:28.765 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:28.765 12:52:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:28.765 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.765 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.765 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.765 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.765 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.765 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.765 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.765 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.765 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.765 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.765 "name": "raid_bdev1", 00:20:28.765 "uuid": "d9a268ea-91eb-4095-8895-4544b8db0409", 00:20:28.765 "strip_size_kb": 64, 00:20:28.765 "state": "online", 00:20:28.765 "raid_level": "raid0", 00:20:28.765 "superblock": true, 00:20:28.765 "num_base_bdevs": 3, 00:20:28.765 "num_base_bdevs_discovered": 3, 00:20:28.765 "num_base_bdevs_operational": 3, 00:20:28.765 "base_bdevs_list": [ 00:20:28.765 { 00:20:28.765 "name": "BaseBdev1", 00:20:28.765 "uuid": "e557b13c-f9f9-5b71-8077-4ae95fc23bdb", 00:20:28.765 "is_configured": true, 00:20:28.765 "data_offset": 2048, 00:20:28.765 "data_size": 63488 00:20:28.765 }, 00:20:28.765 { 00:20:28.765 "name": "BaseBdev2", 00:20:28.765 "uuid": "278a8418-7364-5da2-886c-10cda6e52661", 00:20:28.765 "is_configured": true, 00:20:28.765 "data_offset": 2048, 00:20:28.765 "data_size": 63488 
00:20:28.765 }, 00:20:28.765 { 00:20:28.765 "name": "BaseBdev3", 00:20:28.765 "uuid": "464fb401-288d-59e7-ad7f-ca01ee6a2f0d", 00:20:28.765 "is_configured": true, 00:20:28.765 "data_offset": 2048, 00:20:28.765 "data_size": 63488 00:20:28.765 } 00:20:28.765 ] 00:20:28.765 }' 00:20:28.765 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.765 12:52:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.025 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:20:29.025 12:52:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:29.286 [2024-12-05 12:52:11.666546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.273 12:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.273 "name": "raid_bdev1", 00:20:30.274 "uuid": "d9a268ea-91eb-4095-8895-4544b8db0409", 00:20:30.274 "strip_size_kb": 64, 00:20:30.274 "state": "online", 00:20:30.274 "raid_level": "raid0", 00:20:30.274 "superblock": true, 00:20:30.274 "num_base_bdevs": 3, 00:20:30.274 "num_base_bdevs_discovered": 3, 00:20:30.274 "num_base_bdevs_operational": 3, 00:20:30.274 "base_bdevs_list": [ 00:20:30.274 { 00:20:30.274 "name": "BaseBdev1", 00:20:30.274 "uuid": "e557b13c-f9f9-5b71-8077-4ae95fc23bdb", 00:20:30.274 "is_configured": true, 00:20:30.274 "data_offset": 2048, 00:20:30.274 "data_size": 63488 
00:20:30.274 }, 00:20:30.274 { 00:20:30.274 "name": "BaseBdev2", 00:20:30.274 "uuid": "278a8418-7364-5da2-886c-10cda6e52661", 00:20:30.274 "is_configured": true, 00:20:30.274 "data_offset": 2048, 00:20:30.274 "data_size": 63488 00:20:30.274 }, 00:20:30.274 { 00:20:30.274 "name": "BaseBdev3", 00:20:30.274 "uuid": "464fb401-288d-59e7-ad7f-ca01ee6a2f0d", 00:20:30.274 "is_configured": true, 00:20:30.274 "data_offset": 2048, 00:20:30.274 "data_size": 63488 00:20:30.274 } 00:20:30.274 ] 00:20:30.274 }' 00:20:30.274 12:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.274 12:52:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.534 12:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:30.534 12:52:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.534 12:52:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.534 [2024-12-05 12:52:12.915560] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:30.534 [2024-12-05 12:52:12.915586] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:30.534 [2024-12-05 12:52:12.917999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:30.534 [2024-12-05 12:52:12.918137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.534 [2024-12-05 12:52:12.918176] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:30.534 [2024-12-05 12:52:12.918183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:30.534 { 00:20:30.534 "results": [ 00:20:30.534 { 00:20:30.534 "job": "raid_bdev1", 00:20:30.534 "core_mask": "0x1", 00:20:30.534 "workload": "randrw", 00:20:30.534 "percentage": 50, 
00:20:30.534 "status": "finished", 00:20:30.534 "queue_depth": 1, 00:20:30.534 "io_size": 131072, 00:20:30.534 "runtime": 1.24749, 00:20:30.534 "iops": 17110.357598056897, 00:20:30.534 "mibps": 2138.794699757112, 00:20:30.534 "io_failed": 1, 00:20:30.534 "io_timeout": 0, 00:20:30.534 "avg_latency_us": 80.02237133240601, 00:20:30.534 "min_latency_us": 26.978461538461538, 00:20:30.534 "max_latency_us": 1342.2276923076922 00:20:30.534 } 00:20:30.534 ], 00:20:30.534 "core_count": 1 00:20:30.534 } 00:20:30.534 12:52:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.534 12:52:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63686 00:20:30.534 12:52:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63686 ']' 00:20:30.534 12:52:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63686 00:20:30.534 12:52:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:20:30.534 12:52:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.534 12:52:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63686 00:20:30.534 killing process with pid 63686 00:20:30.534 12:52:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:30.534 12:52:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:30.534 12:52:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63686' 00:20:30.534 12:52:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63686 00:20:30.534 [2024-12-05 12:52:12.953002] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:30.534 12:52:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63686 00:20:30.534 [2024-12-05 
12:52:13.068296] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:31.104 12:52:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:20:31.104 12:52:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:20:31.104 12:52:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZIUb7Or9BQ 00:20:31.104 12:52:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:20:31.104 12:52:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:20:31.104 ************************************ 00:20:31.104 END TEST raid_read_error_test 00:20:31.104 12:52:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:31.104 12:52:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:31.104 12:52:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:20:31.104 00:20:31.104 real 0m3.408s 00:20:31.104 user 0m4.051s 00:20:31.104 sys 0m0.370s 00:20:31.104 12:52:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:31.104 12:52:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.104 ************************************ 00:20:31.365 12:52:13 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:20:31.365 12:52:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:31.365 12:52:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:31.365 12:52:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:31.365 ************************************ 00:20:31.365 START TEST raid_write_error_test 00:20:31.365 ************************************ 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:20:31.365 12:52:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:20:31.365 12:52:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HJIKTTJNFy 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63815 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63815 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63815 ']' 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:31.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.365 12:52:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:31.365 [2024-12-05 12:52:13.782372] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:20:31.365 [2024-12-05 12:52:13.782513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63815 ] 00:20:31.365 [2024-12-05 12:52:13.937755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.626 [2024-12-05 12:52:14.025460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.626 [2024-12-05 12:52:14.136023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:31.626 [2024-12-05 12:52:14.136054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.195 BaseBdev1_malloc 00:20:32.195 12:52:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.195 true 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.195 [2024-12-05 12:52:14.659199] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:32.195 [2024-12-05 12:52:14.659248] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.195 [2024-12-05 12:52:14.659264] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:32.195 [2024-12-05 12:52:14.659272] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.195 [2024-12-05 12:52:14.661020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.195 [2024-12-05 12:52:14.661154] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:32.195 BaseBdev1 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.195 BaseBdev2_malloc 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.195 true 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.195 [2024-12-05 12:52:14.698659] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:32.195 [2024-12-05 12:52:14.698703] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.195 [2024-12-05 12:52:14.698716] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:32.195 [2024-12-05 12:52:14.698725] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.195 [2024-12-05 12:52:14.700497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.195 [2024-12-05 12:52:14.700527] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:32.195 BaseBdev2 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.195 BaseBdev3_malloc 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.195 true 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.195 [2024-12-05 12:52:14.751243] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:32.195 [2024-12-05 12:52:14.751290] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.195 [2024-12-05 12:52:14.751307] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:32.195 [2024-12-05 12:52:14.751317] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.195 [2024-12-05 12:52:14.753385] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.195 [2024-12-05 12:52:14.753429] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:32.195 BaseBdev3 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.195 [2024-12-05 12:52:14.759321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:32.195 [2024-12-05 12:52:14.760897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:32.195 [2024-12-05 12:52:14.760962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:32.195 [2024-12-05 12:52:14.761127] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:32.195 [2024-12-05 12:52:14.761137] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:32.195 [2024-12-05 12:52:14.761353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:20:32.195 [2024-12-05 12:52:14.761477] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:32.195 [2024-12-05 12:52:14.761505] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:32.195 [2024-12-05 12:52:14.761624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.195 
12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.195 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.455 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.455 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.455 "name": "raid_bdev1", 00:20:32.455 "uuid": "3fbad744-7165-4883-b114-e35831ee5760", 00:20:32.455 "strip_size_kb": 64, 00:20:32.455 "state": "online", 00:20:32.455 "raid_level": "raid0", 00:20:32.455 "superblock": true, 
00:20:32.455 "num_base_bdevs": 3, 00:20:32.455 "num_base_bdevs_discovered": 3, 00:20:32.455 "num_base_bdevs_operational": 3, 00:20:32.455 "base_bdevs_list": [ 00:20:32.455 { 00:20:32.455 "name": "BaseBdev1", 00:20:32.455 "uuid": "94c2fb2a-e9f1-58b8-afcc-fa8a3253c361", 00:20:32.455 "is_configured": true, 00:20:32.455 "data_offset": 2048, 00:20:32.455 "data_size": 63488 00:20:32.455 }, 00:20:32.455 { 00:20:32.455 "name": "BaseBdev2", 00:20:32.455 "uuid": "9be9b013-0a98-5344-8e78-9a3b1b566d67", 00:20:32.455 "is_configured": true, 00:20:32.455 "data_offset": 2048, 00:20:32.455 "data_size": 63488 00:20:32.455 }, 00:20:32.455 { 00:20:32.455 "name": "BaseBdev3", 00:20:32.455 "uuid": "58056b62-dc93-5f96-b242-67bc3a2bedd6", 00:20:32.455 "is_configured": true, 00:20:32.455 "data_offset": 2048, 00:20:32.455 "data_size": 63488 00:20:32.455 } 00:20:32.455 ] 00:20:32.455 }' 00:20:32.455 12:52:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.455 12:52:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.716 12:52:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:20:32.716 12:52:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:32.716 [2024-12-05 12:52:15.156164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local 
expected_num_base_bdevs 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:20:33.651 "name": "raid_bdev1", 00:20:33.651 "uuid": "3fbad744-7165-4883-b114-e35831ee5760", 00:20:33.651 "strip_size_kb": 64, 00:20:33.651 "state": "online", 00:20:33.651 "raid_level": "raid0", 00:20:33.651 "superblock": true, 00:20:33.651 "num_base_bdevs": 3, 00:20:33.651 "num_base_bdevs_discovered": 3, 00:20:33.651 "num_base_bdevs_operational": 3, 00:20:33.651 "base_bdevs_list": [ 00:20:33.651 { 00:20:33.651 "name": "BaseBdev1", 00:20:33.651 "uuid": "94c2fb2a-e9f1-58b8-afcc-fa8a3253c361", 00:20:33.651 "is_configured": true, 00:20:33.651 "data_offset": 2048, 00:20:33.651 "data_size": 63488 00:20:33.651 }, 00:20:33.651 { 00:20:33.651 "name": "BaseBdev2", 00:20:33.651 "uuid": "9be9b013-0a98-5344-8e78-9a3b1b566d67", 00:20:33.651 "is_configured": true, 00:20:33.651 "data_offset": 2048, 00:20:33.651 "data_size": 63488 00:20:33.651 }, 00:20:33.651 { 00:20:33.651 "name": "BaseBdev3", 00:20:33.651 "uuid": "58056b62-dc93-5f96-b242-67bc3a2bedd6", 00:20:33.651 "is_configured": true, 00:20:33.651 "data_offset": 2048, 00:20:33.651 "data_size": 63488 00:20:33.651 } 00:20:33.651 ] 00:20:33.651 }' 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.651 12:52:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.910 12:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:33.910 12:52:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.910 12:52:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.910 [2024-12-05 12:52:16.408977] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:33.910 [2024-12-05 12:52:16.409000] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:33.910 [2024-12-05 12:52:16.411502] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:20:33.910 [2024-12-05 12:52:16.411546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:33.910 [2024-12-05 12:52:16.411577] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:33.910 [2024-12-05 12:52:16.411584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:33.910 { 00:20:33.910 "results": [ 00:20:33.910 { 00:20:33.910 "job": "raid_bdev1", 00:20:33.910 "core_mask": "0x1", 00:20:33.910 "workload": "randrw", 00:20:33.910 "percentage": 50, 00:20:33.910 "status": "finished", 00:20:33.910 "queue_depth": 1, 00:20:33.910 "io_size": 131072, 00:20:33.910 "runtime": 1.251298, 00:20:33.910 "iops": 17709.610340622297, 00:20:33.910 "mibps": 2213.701292577787, 00:20:33.910 "io_failed": 1, 00:20:33.910 "io_timeout": 0, 00:20:33.910 "avg_latency_us": 77.12627241897582, 00:20:33.910 "min_latency_us": 25.993846153846153, 00:20:33.910 "max_latency_us": 1310.72 00:20:33.910 } 00:20:33.910 ], 00:20:33.910 "core_count": 1 00:20:33.910 } 00:20:33.910 12:52:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.910 12:52:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63815 00:20:33.910 12:52:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63815 ']' 00:20:33.910 12:52:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63815 00:20:33.910 12:52:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:20:33.910 12:52:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.910 12:52:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63815 00:20:33.910 killing process with pid 63815 00:20:33.910 12:52:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:33.910 12:52:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:33.910 12:52:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63815' 00:20:33.910 12:52:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63815 00:20:33.910 [2024-12-05 12:52:16.442827] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:33.910 12:52:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63815 00:20:34.168 [2024-12-05 12:52:16.554517] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:34.735 12:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HJIKTTJNFy 00:20:34.735 12:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:20:34.735 12:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:20:34.735 12:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.80 00:20:34.736 12:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:20:34.736 12:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:34.736 12:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:34.736 ************************************ 00:20:34.736 END TEST raid_write_error_test 00:20:34.736 ************************************ 00:20:34.736 12:52:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.80 != \0\.\0\0 ]] 00:20:34.736 00:20:34.736 real 0m3.460s 00:20:34.736 user 0m4.142s 00:20:34.736 sys 0m0.381s 00:20:34.736 12:52:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:34.736 12:52:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.736 
12:52:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:20:34.736 12:52:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:20:34.736 12:52:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:34.736 12:52:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:34.736 12:52:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:34.736 ************************************ 00:20:34.736 START TEST raid_state_function_test 00:20:34.736 ************************************ 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i 
<= num_base_bdevs )) 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63947 00:20:34.736 Process raid pid: 63947 00:20:34.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63947' 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63947 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63947 ']' 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.736 12:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:34.736 [2024-12-05 12:52:17.278000] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:20:34.736 [2024-12-05 12:52:17.278255] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.994 [2024-12-05 12:52:17.436875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.994 [2024-12-05 12:52:17.537864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.253 [2024-12-05 12:52:17.673136] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:35.253 [2024-12-05 12:52:17.673164] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:35.512 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.512 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:20:35.512 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:35.512 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.512 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.512 [2024-12-05 12:52:18.087269] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:35.512 [2024-12-05 12:52:18.087319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:35.512 [2024-12-05 12:52:18.087329] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:35.512 [2024-12-05 12:52:18.087339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:35.512 [2024-12-05 12:52:18.087345] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:20:35.512 [2024-12-05 12:52:18.087354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:35.512 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.512 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:35.512 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:35.512 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:35.512 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:35.512 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:35.512 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:35.512 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.512 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.512 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.512 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.512 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.512 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.512 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.771 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.771 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.771 12:52:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.771 "name": "Existed_Raid", 00:20:35.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.771 "strip_size_kb": 64, 00:20:35.771 "state": "configuring", 00:20:35.771 "raid_level": "concat", 00:20:35.771 "superblock": false, 00:20:35.771 "num_base_bdevs": 3, 00:20:35.771 "num_base_bdevs_discovered": 0, 00:20:35.771 "num_base_bdevs_operational": 3, 00:20:35.771 "base_bdevs_list": [ 00:20:35.771 { 00:20:35.771 "name": "BaseBdev1", 00:20:35.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.771 "is_configured": false, 00:20:35.771 "data_offset": 0, 00:20:35.771 "data_size": 0 00:20:35.771 }, 00:20:35.771 { 00:20:35.771 "name": "BaseBdev2", 00:20:35.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.771 "is_configured": false, 00:20:35.771 "data_offset": 0, 00:20:35.771 "data_size": 0 00:20:35.771 }, 00:20:35.771 { 00:20:35.771 "name": "BaseBdev3", 00:20:35.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.771 "is_configured": false, 00:20:35.771 "data_offset": 0, 00:20:35.771 "data_size": 0 00:20:35.771 } 00:20:35.771 ] 00:20:35.771 }' 00:20:35.771 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.771 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.031 [2024-12-05 12:52:18.399303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:36.031 [2024-12-05 12:52:18.399335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.031 [2024-12-05 12:52:18.407310] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:36.031 [2024-12-05 12:52:18.407350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:36.031 [2024-12-05 12:52:18.407359] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:36.031 [2024-12-05 12:52:18.407368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:36.031 [2024-12-05 12:52:18.407374] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:36.031 [2024-12-05 12:52:18.407383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.031 [2024-12-05 12:52:18.439958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:36.031 BaseBdev1 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.031 [ 00:20:36.031 { 00:20:36.031 "name": "BaseBdev1", 00:20:36.031 "aliases": [ 00:20:36.031 "18077d35-9321-4c20-a979-966429c3d0e7" 00:20:36.031 ], 00:20:36.031 "product_name": "Malloc disk", 00:20:36.031 "block_size": 512, 00:20:36.031 "num_blocks": 65536, 00:20:36.031 "uuid": "18077d35-9321-4c20-a979-966429c3d0e7", 00:20:36.031 "assigned_rate_limits": { 00:20:36.031 "rw_ios_per_sec": 0, 00:20:36.031 "rw_mbytes_per_sec": 0, 00:20:36.031 "r_mbytes_per_sec": 0, 00:20:36.031 "w_mbytes_per_sec": 0 00:20:36.031 }, 
00:20:36.031 "claimed": true, 00:20:36.031 "claim_type": "exclusive_write", 00:20:36.031 "zoned": false, 00:20:36.031 "supported_io_types": { 00:20:36.031 "read": true, 00:20:36.031 "write": true, 00:20:36.031 "unmap": true, 00:20:36.031 "flush": true, 00:20:36.031 "reset": true, 00:20:36.031 "nvme_admin": false, 00:20:36.031 "nvme_io": false, 00:20:36.031 "nvme_io_md": false, 00:20:36.031 "write_zeroes": true, 00:20:36.031 "zcopy": true, 00:20:36.031 "get_zone_info": false, 00:20:36.031 "zone_management": false, 00:20:36.031 "zone_append": false, 00:20:36.031 "compare": false, 00:20:36.031 "compare_and_write": false, 00:20:36.031 "abort": true, 00:20:36.031 "seek_hole": false, 00:20:36.031 "seek_data": false, 00:20:36.031 "copy": true, 00:20:36.031 "nvme_iov_md": false 00:20:36.031 }, 00:20:36.031 "memory_domains": [ 00:20:36.031 { 00:20:36.031 "dma_device_id": "system", 00:20:36.031 "dma_device_type": 1 00:20:36.031 }, 00:20:36.031 { 00:20:36.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.031 "dma_device_type": 2 00:20:36.031 } 00:20:36.031 ], 00:20:36.031 "driver_specific": {} 00:20:36.031 } 00:20:36.031 ] 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:36.031 12:52:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.031 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.032 "name": "Existed_Raid", 00:20:36.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.032 "strip_size_kb": 64, 00:20:36.032 "state": "configuring", 00:20:36.032 "raid_level": "concat", 00:20:36.032 "superblock": false, 00:20:36.032 "num_base_bdevs": 3, 00:20:36.032 "num_base_bdevs_discovered": 1, 00:20:36.032 "num_base_bdevs_operational": 3, 00:20:36.032 "base_bdevs_list": [ 00:20:36.032 { 00:20:36.032 "name": "BaseBdev1", 00:20:36.032 "uuid": "18077d35-9321-4c20-a979-966429c3d0e7", 00:20:36.032 "is_configured": true, 00:20:36.032 "data_offset": 0, 00:20:36.032 "data_size": 65536 00:20:36.032 }, 00:20:36.032 { 00:20:36.032 "name": "BaseBdev2", 00:20:36.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.032 "is_configured": false, 00:20:36.032 
"data_offset": 0, 00:20:36.032 "data_size": 0 00:20:36.032 }, 00:20:36.032 { 00:20:36.032 "name": "BaseBdev3", 00:20:36.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.032 "is_configured": false, 00:20:36.032 "data_offset": 0, 00:20:36.032 "data_size": 0 00:20:36.032 } 00:20:36.032 ] 00:20:36.032 }' 00:20:36.032 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.032 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.291 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:36.291 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.291 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.291 [2024-12-05 12:52:18.780102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:36.291 [2024-12-05 12:52:18.780150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:36.291 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.291 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:36.291 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.291 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.291 [2024-12-05 12:52:18.788146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:36.291 [2024-12-05 12:52:18.790539] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:36.291 [2024-12-05 12:52:18.790591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:20:36.292 [2024-12-05 12:52:18.790605] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:36.292 [2024-12-05 12:52:18.790620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:36.292 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.292 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:36.292 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:36.292 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:36.292 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:36.292 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:36.292 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:36.292 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:36.292 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:36.292 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.292 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.292 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.292 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.292 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.292 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:36.292 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.292 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.292 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.292 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.292 "name": "Existed_Raid", 00:20:36.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.292 "strip_size_kb": 64, 00:20:36.292 "state": "configuring", 00:20:36.292 "raid_level": "concat", 00:20:36.292 "superblock": false, 00:20:36.292 "num_base_bdevs": 3, 00:20:36.292 "num_base_bdevs_discovered": 1, 00:20:36.292 "num_base_bdevs_operational": 3, 00:20:36.292 "base_bdevs_list": [ 00:20:36.292 { 00:20:36.292 "name": "BaseBdev1", 00:20:36.292 "uuid": "18077d35-9321-4c20-a979-966429c3d0e7", 00:20:36.292 "is_configured": true, 00:20:36.292 "data_offset": 0, 00:20:36.292 "data_size": 65536 00:20:36.292 }, 00:20:36.292 { 00:20:36.292 "name": "BaseBdev2", 00:20:36.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.292 "is_configured": false, 00:20:36.292 "data_offset": 0, 00:20:36.292 "data_size": 0 00:20:36.292 }, 00:20:36.292 { 00:20:36.292 "name": "BaseBdev3", 00:20:36.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.292 "is_configured": false, 00:20:36.292 "data_offset": 0, 00:20:36.292 "data_size": 0 00:20:36.292 } 00:20:36.292 ] 00:20:36.292 }' 00:20:36.292 12:52:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.292 12:52:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.551 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:36.551 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:36.551 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.810 [2024-12-05 12:52:19.142749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:36.810 BaseBdev2 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.810 [ 00:20:36.810 { 00:20:36.810 "name": "BaseBdev2", 00:20:36.810 "aliases": [ 00:20:36.810 "671bed1f-dad3-491c-a0f8-21569c58ec2b" 00:20:36.810 ], 00:20:36.810 
"product_name": "Malloc disk", 00:20:36.810 "block_size": 512, 00:20:36.810 "num_blocks": 65536, 00:20:36.810 "uuid": "671bed1f-dad3-491c-a0f8-21569c58ec2b", 00:20:36.810 "assigned_rate_limits": { 00:20:36.810 "rw_ios_per_sec": 0, 00:20:36.810 "rw_mbytes_per_sec": 0, 00:20:36.810 "r_mbytes_per_sec": 0, 00:20:36.810 "w_mbytes_per_sec": 0 00:20:36.810 }, 00:20:36.810 "claimed": true, 00:20:36.810 "claim_type": "exclusive_write", 00:20:36.810 "zoned": false, 00:20:36.810 "supported_io_types": { 00:20:36.810 "read": true, 00:20:36.810 "write": true, 00:20:36.810 "unmap": true, 00:20:36.810 "flush": true, 00:20:36.810 "reset": true, 00:20:36.810 "nvme_admin": false, 00:20:36.810 "nvme_io": false, 00:20:36.810 "nvme_io_md": false, 00:20:36.810 "write_zeroes": true, 00:20:36.810 "zcopy": true, 00:20:36.810 "get_zone_info": false, 00:20:36.810 "zone_management": false, 00:20:36.810 "zone_append": false, 00:20:36.810 "compare": false, 00:20:36.810 "compare_and_write": false, 00:20:36.810 "abort": true, 00:20:36.810 "seek_hole": false, 00:20:36.810 "seek_data": false, 00:20:36.810 "copy": true, 00:20:36.810 "nvme_iov_md": false 00:20:36.810 }, 00:20:36.810 "memory_domains": [ 00:20:36.810 { 00:20:36.810 "dma_device_id": "system", 00:20:36.810 "dma_device_type": 1 00:20:36.810 }, 00:20:36.810 { 00:20:36.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.810 "dma_device_type": 2 00:20:36.810 } 00:20:36.810 ], 00:20:36.810 "driver_specific": {} 00:20:36.810 } 00:20:36.810 ] 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:36.810 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:36.811 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:36.811 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.811 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.811 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.811 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.811 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.811 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.811 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.811 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.811 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.811 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.811 "name": "Existed_Raid", 00:20:36.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.811 "strip_size_kb": 64, 00:20:36.811 "state": "configuring", 00:20:36.811 "raid_level": "concat", 00:20:36.811 "superblock": false, 
00:20:36.811 "num_base_bdevs": 3, 00:20:36.811 "num_base_bdevs_discovered": 2, 00:20:36.811 "num_base_bdevs_operational": 3, 00:20:36.811 "base_bdevs_list": [ 00:20:36.811 { 00:20:36.811 "name": "BaseBdev1", 00:20:36.811 "uuid": "18077d35-9321-4c20-a979-966429c3d0e7", 00:20:36.811 "is_configured": true, 00:20:36.811 "data_offset": 0, 00:20:36.811 "data_size": 65536 00:20:36.811 }, 00:20:36.811 { 00:20:36.811 "name": "BaseBdev2", 00:20:36.811 "uuid": "671bed1f-dad3-491c-a0f8-21569c58ec2b", 00:20:36.811 "is_configured": true, 00:20:36.811 "data_offset": 0, 00:20:36.811 "data_size": 65536 00:20:36.811 }, 00:20:36.811 { 00:20:36.811 "name": "BaseBdev3", 00:20:36.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.811 "is_configured": false, 00:20:36.811 "data_offset": 0, 00:20:36.811 "data_size": 0 00:20:36.811 } 00:20:36.811 ] 00:20:36.811 }' 00:20:36.811 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.811 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.070 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:37.070 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.070 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.070 [2024-12-05 12:52:19.512904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:37.070 [2024-12-05 12:52:19.512948] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:37.070 [2024-12-05 12:52:19.512960] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:37.070 [2024-12-05 12:52:19.513215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:37.070 [2024-12-05 12:52:19.513361] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000007e80 00:20:37.070 [2024-12-05 12:52:19.513370] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:37.070 [2024-12-05 12:52:19.513635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.070 BaseBdev3 00:20:37.070 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.070 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:37.070 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:37.070 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:37.070 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:37.070 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:37.070 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:37.070 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:37.070 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.070 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.070 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.070 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:37.070 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.070 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.070 [ 00:20:37.070 { 00:20:37.070 "name": "BaseBdev3", 00:20:37.070 "aliases": [ 
00:20:37.070 "db2cf7c0-3658-479f-ab0e-0a11b24accf6" 00:20:37.070 ], 00:20:37.070 "product_name": "Malloc disk", 00:20:37.070 "block_size": 512, 00:20:37.070 "num_blocks": 65536, 00:20:37.070 "uuid": "db2cf7c0-3658-479f-ab0e-0a11b24accf6", 00:20:37.070 "assigned_rate_limits": { 00:20:37.070 "rw_ios_per_sec": 0, 00:20:37.070 "rw_mbytes_per_sec": 0, 00:20:37.070 "r_mbytes_per_sec": 0, 00:20:37.070 "w_mbytes_per_sec": 0 00:20:37.070 }, 00:20:37.070 "claimed": true, 00:20:37.070 "claim_type": "exclusive_write", 00:20:37.070 "zoned": false, 00:20:37.070 "supported_io_types": { 00:20:37.070 "read": true, 00:20:37.070 "write": true, 00:20:37.070 "unmap": true, 00:20:37.070 "flush": true, 00:20:37.070 "reset": true, 00:20:37.070 "nvme_admin": false, 00:20:37.070 "nvme_io": false, 00:20:37.070 "nvme_io_md": false, 00:20:37.070 "write_zeroes": true, 00:20:37.070 "zcopy": true, 00:20:37.070 "get_zone_info": false, 00:20:37.070 "zone_management": false, 00:20:37.070 "zone_append": false, 00:20:37.070 "compare": false, 00:20:37.070 "compare_and_write": false, 00:20:37.070 "abort": true, 00:20:37.070 "seek_hole": false, 00:20:37.070 "seek_data": false, 00:20:37.070 "copy": true, 00:20:37.070 "nvme_iov_md": false 00:20:37.070 }, 00:20:37.070 "memory_domains": [ 00:20:37.070 { 00:20:37.070 "dma_device_id": "system", 00:20:37.070 "dma_device_type": 1 00:20:37.070 }, 00:20:37.070 { 00:20:37.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.071 "dma_device_type": 2 00:20:37.071 } 00:20:37.071 ], 00:20:37.071 "driver_specific": {} 00:20:37.071 } 00:20:37.071 ] 00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.071 "name": "Existed_Raid", 00:20:37.071 "uuid": "ef32497d-a9f2-4811-b7f7-c841fea7e9be", 00:20:37.071 "strip_size_kb": 64, 00:20:37.071 "state": "online", 
00:20:37.071 "raid_level": "concat", 00:20:37.071 "superblock": false, 00:20:37.071 "num_base_bdevs": 3, 00:20:37.071 "num_base_bdevs_discovered": 3, 00:20:37.071 "num_base_bdevs_operational": 3, 00:20:37.071 "base_bdevs_list": [ 00:20:37.071 { 00:20:37.071 "name": "BaseBdev1", 00:20:37.071 "uuid": "18077d35-9321-4c20-a979-966429c3d0e7", 00:20:37.071 "is_configured": true, 00:20:37.071 "data_offset": 0, 00:20:37.071 "data_size": 65536 00:20:37.071 }, 00:20:37.071 { 00:20:37.071 "name": "BaseBdev2", 00:20:37.071 "uuid": "671bed1f-dad3-491c-a0f8-21569c58ec2b", 00:20:37.071 "is_configured": true, 00:20:37.071 "data_offset": 0, 00:20:37.071 "data_size": 65536 00:20:37.071 }, 00:20:37.071 { 00:20:37.071 "name": "BaseBdev3", 00:20:37.071 "uuid": "db2cf7c0-3658-479f-ab0e-0a11b24accf6", 00:20:37.071 "is_configured": true, 00:20:37.071 "data_offset": 0, 00:20:37.071 "data_size": 65536 00:20:37.071 } 00:20:37.071 ] 00:20:37.071 }' 00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.071 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.330 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:37.330 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:37.330 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:37.330 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:37.330 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:37.330 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:37.330 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:37.330 12:52:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:37.330 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.330 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.330 [2024-12-05 12:52:19.833350] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:37.330 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.330 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:37.330 "name": "Existed_Raid", 00:20:37.330 "aliases": [ 00:20:37.330 "ef32497d-a9f2-4811-b7f7-c841fea7e9be" 00:20:37.330 ], 00:20:37.330 "product_name": "Raid Volume", 00:20:37.330 "block_size": 512, 00:20:37.330 "num_blocks": 196608, 00:20:37.330 "uuid": "ef32497d-a9f2-4811-b7f7-c841fea7e9be", 00:20:37.330 "assigned_rate_limits": { 00:20:37.330 "rw_ios_per_sec": 0, 00:20:37.330 "rw_mbytes_per_sec": 0, 00:20:37.330 "r_mbytes_per_sec": 0, 00:20:37.331 "w_mbytes_per_sec": 0 00:20:37.331 }, 00:20:37.331 "claimed": false, 00:20:37.331 "zoned": false, 00:20:37.331 "supported_io_types": { 00:20:37.331 "read": true, 00:20:37.331 "write": true, 00:20:37.331 "unmap": true, 00:20:37.331 "flush": true, 00:20:37.331 "reset": true, 00:20:37.331 "nvme_admin": false, 00:20:37.331 "nvme_io": false, 00:20:37.331 "nvme_io_md": false, 00:20:37.331 "write_zeroes": true, 00:20:37.331 "zcopy": false, 00:20:37.331 "get_zone_info": false, 00:20:37.331 "zone_management": false, 00:20:37.331 "zone_append": false, 00:20:37.331 "compare": false, 00:20:37.331 "compare_and_write": false, 00:20:37.331 "abort": false, 00:20:37.331 "seek_hole": false, 00:20:37.331 "seek_data": false, 00:20:37.331 "copy": false, 00:20:37.331 "nvme_iov_md": false 00:20:37.331 }, 00:20:37.331 "memory_domains": [ 00:20:37.331 { 00:20:37.331 "dma_device_id": "system", 00:20:37.331 "dma_device_type": 1 
00:20:37.331 }, 00:20:37.331 { 00:20:37.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.331 "dma_device_type": 2 00:20:37.331 }, 00:20:37.331 { 00:20:37.331 "dma_device_id": "system", 00:20:37.331 "dma_device_type": 1 00:20:37.331 }, 00:20:37.331 { 00:20:37.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.331 "dma_device_type": 2 00:20:37.331 }, 00:20:37.331 { 00:20:37.331 "dma_device_id": "system", 00:20:37.331 "dma_device_type": 1 00:20:37.331 }, 00:20:37.331 { 00:20:37.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.331 "dma_device_type": 2 00:20:37.331 } 00:20:37.331 ], 00:20:37.331 "driver_specific": { 00:20:37.331 "raid": { 00:20:37.331 "uuid": "ef32497d-a9f2-4811-b7f7-c841fea7e9be", 00:20:37.331 "strip_size_kb": 64, 00:20:37.331 "state": "online", 00:20:37.331 "raid_level": "concat", 00:20:37.331 "superblock": false, 00:20:37.331 "num_base_bdevs": 3, 00:20:37.331 "num_base_bdevs_discovered": 3, 00:20:37.331 "num_base_bdevs_operational": 3, 00:20:37.331 "base_bdevs_list": [ 00:20:37.331 { 00:20:37.331 "name": "BaseBdev1", 00:20:37.331 "uuid": "18077d35-9321-4c20-a979-966429c3d0e7", 00:20:37.331 "is_configured": true, 00:20:37.331 "data_offset": 0, 00:20:37.331 "data_size": 65536 00:20:37.331 }, 00:20:37.331 { 00:20:37.331 "name": "BaseBdev2", 00:20:37.331 "uuid": "671bed1f-dad3-491c-a0f8-21569c58ec2b", 00:20:37.331 "is_configured": true, 00:20:37.331 "data_offset": 0, 00:20:37.331 "data_size": 65536 00:20:37.331 }, 00:20:37.331 { 00:20:37.331 "name": "BaseBdev3", 00:20:37.331 "uuid": "db2cf7c0-3658-479f-ab0e-0a11b24accf6", 00:20:37.331 "is_configured": true, 00:20:37.331 "data_offset": 0, 00:20:37.331 "data_size": 65536 00:20:37.331 } 00:20:37.331 ] 00:20:37.331 } 00:20:37.331 } 00:20:37.331 }' 00:20:37.331 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:37.331 12:52:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:37.331 BaseBdev2 00:20:37.331 BaseBdev3' 00:20:37.331 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.590 12:52:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.590 [2024-12-05 12:52:20.025144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:37.590 [2024-12-05 12:52:20.025289] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:37.590 [2024-12-05 12:52:20.025365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.590 "name": "Existed_Raid", 00:20:37.590 "uuid": "ef32497d-a9f2-4811-b7f7-c841fea7e9be", 00:20:37.590 "strip_size_kb": 64, 00:20:37.590 "state": "offline", 00:20:37.590 "raid_level": "concat", 00:20:37.590 "superblock": false, 00:20:37.590 "num_base_bdevs": 3, 00:20:37.590 "num_base_bdevs_discovered": 2, 00:20:37.590 "num_base_bdevs_operational": 2, 00:20:37.590 "base_bdevs_list": [ 00:20:37.590 { 00:20:37.590 "name": null, 00:20:37.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.590 "is_configured": false, 00:20:37.590 "data_offset": 0, 00:20:37.590 "data_size": 65536 00:20:37.590 }, 00:20:37.590 { 00:20:37.590 "name": "BaseBdev2", 00:20:37.590 "uuid": "671bed1f-dad3-491c-a0f8-21569c58ec2b", 00:20:37.590 "is_configured": true, 00:20:37.590 "data_offset": 0, 00:20:37.590 "data_size": 65536 00:20:37.590 }, 00:20:37.590 { 00:20:37.590 "name": "BaseBdev3", 00:20:37.590 "uuid": "db2cf7c0-3658-479f-ab0e-0a11b24accf6", 00:20:37.590 "is_configured": true, 00:20:37.590 "data_offset": 0, 00:20:37.590 "data_size": 65536 00:20:37.590 } 00:20:37.590 ] 00:20:37.590 }' 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.590 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.851 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:37.851 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:37.851 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 
00:20:37.851 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.851 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.851 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:37.851 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.851 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:37.851 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:37.851 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:37.851 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.851 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.113 [2024-12-05 12:52:20.434991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.113 12:52:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.113 [2024-12-05 12:52:20.521808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:38.113 [2024-12-05 12:52:20.521852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:20:38.113 
12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.113 BaseBdev2 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.113 [ 00:20:38.113 { 00:20:38.113 "name": "BaseBdev2", 00:20:38.113 "aliases": [ 00:20:38.113 "57bf04fc-9d84-4d26-803c-8403c8c4f7c0" 00:20:38.113 ], 00:20:38.113 "product_name": "Malloc disk", 00:20:38.113 "block_size": 512, 00:20:38.113 "num_blocks": 65536, 00:20:38.113 "uuid": "57bf04fc-9d84-4d26-803c-8403c8c4f7c0", 00:20:38.113 "assigned_rate_limits": { 00:20:38.113 "rw_ios_per_sec": 0, 00:20:38.113 "rw_mbytes_per_sec": 0, 00:20:38.113 "r_mbytes_per_sec": 0, 00:20:38.113 "w_mbytes_per_sec": 0 00:20:38.113 }, 00:20:38.113 "claimed": false, 00:20:38.113 "zoned": false, 00:20:38.113 "supported_io_types": { 00:20:38.113 "read": true, 00:20:38.113 "write": true, 00:20:38.113 "unmap": true, 00:20:38.113 "flush": true, 00:20:38.113 "reset": true, 00:20:38.113 "nvme_admin": false, 00:20:38.113 "nvme_io": false, 00:20:38.113 "nvme_io_md": false, 00:20:38.113 "write_zeroes": true, 00:20:38.113 "zcopy": true, 00:20:38.113 "get_zone_info": false, 00:20:38.113 "zone_management": false, 00:20:38.113 "zone_append": false, 00:20:38.113 "compare": false, 00:20:38.113 "compare_and_write": false, 00:20:38.113 "abort": true, 00:20:38.113 "seek_hole": false, 00:20:38.113 "seek_data": false, 00:20:38.113 "copy": true, 00:20:38.113 "nvme_iov_md": false 00:20:38.113 }, 00:20:38.113 "memory_domains": [ 00:20:38.113 { 00:20:38.113 "dma_device_id": "system", 00:20:38.113 "dma_device_type": 1 00:20:38.113 }, 00:20:38.113 { 00:20:38.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.113 "dma_device_type": 2 00:20:38.113 } 00:20:38.113 ], 00:20:38.113 "driver_specific": {} 00:20:38.113 } 00:20:38.113 ] 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:38.113 
12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.113 BaseBdev3 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.113 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.113 [ 00:20:38.113 { 00:20:38.113 "name": "BaseBdev3", 00:20:38.113 "aliases": [ 00:20:38.113 "40f44653-975b-45bb-9ca8-c3f4d9bc18ad" 00:20:38.113 ], 00:20:38.113 "product_name": "Malloc disk", 00:20:38.114 "block_size": 512, 00:20:38.114 "num_blocks": 65536, 00:20:38.114 "uuid": "40f44653-975b-45bb-9ca8-c3f4d9bc18ad", 00:20:38.114 "assigned_rate_limits": { 00:20:38.114 "rw_ios_per_sec": 0, 00:20:38.114 "rw_mbytes_per_sec": 0, 00:20:38.114 "r_mbytes_per_sec": 0, 00:20:38.114 "w_mbytes_per_sec": 0 00:20:38.114 }, 00:20:38.114 "claimed": false, 00:20:38.114 "zoned": false, 00:20:38.114 "supported_io_types": { 00:20:38.114 "read": true, 00:20:38.114 "write": true, 00:20:38.114 "unmap": true, 00:20:38.114 "flush": true, 00:20:38.114 "reset": true, 00:20:38.114 "nvme_admin": false, 00:20:38.114 "nvme_io": false, 00:20:38.114 "nvme_io_md": false, 00:20:38.114 "write_zeroes": true, 00:20:38.114 "zcopy": true, 00:20:38.114 "get_zone_info": false, 00:20:38.114 "zone_management": false, 00:20:38.114 "zone_append": false, 00:20:38.114 "compare": false, 00:20:38.114 "compare_and_write": false, 00:20:38.114 "abort": true, 00:20:38.114 "seek_hole": false, 00:20:38.114 "seek_data": false, 00:20:38.114 "copy": true, 00:20:38.114 "nvme_iov_md": false 00:20:38.114 }, 00:20:38.114 "memory_domains": [ 00:20:38.114 { 00:20:38.114 "dma_device_id": "system", 00:20:38.114 "dma_device_type": 1 00:20:38.114 }, 00:20:38.114 { 00:20:38.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.417 "dma_device_type": 2 00:20:38.417 } 00:20:38.417 ], 00:20:38.417 "driver_specific": {} 00:20:38.417 } 00:20:38.417 ] 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:38.417 
12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.417 [2024-12-05 12:52:20.699723] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:38.417 [2024-12-05 12:52:20.699761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:38.417 [2024-12-05 12:52:20.699779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:38.417 [2024-12-05 12:52:20.701281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.417 "name": "Existed_Raid", 00:20:38.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.417 "strip_size_kb": 64, 00:20:38.417 "state": "configuring", 00:20:38.417 "raid_level": "concat", 00:20:38.417 "superblock": false, 00:20:38.417 "num_base_bdevs": 3, 00:20:38.417 "num_base_bdevs_discovered": 2, 00:20:38.417 "num_base_bdevs_operational": 3, 00:20:38.417 "base_bdevs_list": [ 00:20:38.417 { 00:20:38.417 "name": "BaseBdev1", 00:20:38.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.417 "is_configured": false, 00:20:38.417 "data_offset": 0, 00:20:38.417 "data_size": 0 00:20:38.417 }, 00:20:38.417 { 00:20:38.417 "name": "BaseBdev2", 00:20:38.417 "uuid": "57bf04fc-9d84-4d26-803c-8403c8c4f7c0", 00:20:38.417 "is_configured": true, 00:20:38.417 "data_offset": 0, 00:20:38.417 "data_size": 65536 00:20:38.417 }, 00:20:38.417 { 00:20:38.417 "name": "BaseBdev3", 00:20:38.417 "uuid": 
"40f44653-975b-45bb-9ca8-c3f4d9bc18ad", 00:20:38.417 "is_configured": true, 00:20:38.417 "data_offset": 0, 00:20:38.417 "data_size": 65536 00:20:38.417 } 00:20:38.417 ] 00:20:38.417 }' 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.417 12:52:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.697 [2024-12-05 12:52:21.007790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.697 "name": "Existed_Raid", 00:20:38.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.697 "strip_size_kb": 64, 00:20:38.697 "state": "configuring", 00:20:38.697 "raid_level": "concat", 00:20:38.697 "superblock": false, 00:20:38.697 "num_base_bdevs": 3, 00:20:38.697 "num_base_bdevs_discovered": 1, 00:20:38.697 "num_base_bdevs_operational": 3, 00:20:38.697 "base_bdevs_list": [ 00:20:38.697 { 00:20:38.697 "name": "BaseBdev1", 00:20:38.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.697 "is_configured": false, 00:20:38.697 "data_offset": 0, 00:20:38.697 "data_size": 0 00:20:38.697 }, 00:20:38.697 { 00:20:38.697 "name": null, 00:20:38.697 "uuid": "57bf04fc-9d84-4d26-803c-8403c8c4f7c0", 00:20:38.697 "is_configured": false, 00:20:38.697 "data_offset": 0, 00:20:38.697 "data_size": 65536 00:20:38.697 }, 00:20:38.697 { 00:20:38.697 "name": "BaseBdev3", 00:20:38.697 "uuid": "40f44653-975b-45bb-9ca8-c3f4d9bc18ad", 00:20:38.697 "is_configured": true, 00:20:38.697 "data_offset": 0, 00:20:38.697 "data_size": 65536 00:20:38.697 } 00:20:38.697 ] 00:20:38.697 }' 00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:20:38.697 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.958 [2024-12-05 12:52:21.382450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:38.958 BaseBdev1 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.958 [ 00:20:38.958 { 00:20:38.958 "name": "BaseBdev1", 00:20:38.958 "aliases": [ 00:20:38.958 "c3ffcda8-7bda-4083-a231-3ba1d0002cbc" 00:20:38.958 ], 00:20:38.958 "product_name": "Malloc disk", 00:20:38.958 "block_size": 512, 00:20:38.958 "num_blocks": 65536, 00:20:38.958 "uuid": "c3ffcda8-7bda-4083-a231-3ba1d0002cbc", 00:20:38.958 "assigned_rate_limits": { 00:20:38.958 "rw_ios_per_sec": 0, 00:20:38.958 "rw_mbytes_per_sec": 0, 00:20:38.958 "r_mbytes_per_sec": 0, 00:20:38.958 "w_mbytes_per_sec": 0 00:20:38.958 }, 00:20:38.958 "claimed": true, 00:20:38.958 "claim_type": "exclusive_write", 00:20:38.958 "zoned": false, 00:20:38.958 "supported_io_types": { 00:20:38.958 "read": true, 00:20:38.958 "write": true, 00:20:38.958 "unmap": true, 00:20:38.958 "flush": true, 00:20:38.958 "reset": true, 00:20:38.958 "nvme_admin": false, 00:20:38.958 "nvme_io": false, 00:20:38.958 "nvme_io_md": false, 00:20:38.958 "write_zeroes": true, 00:20:38.958 "zcopy": true, 00:20:38.958 "get_zone_info": false, 00:20:38.958 "zone_management": false, 00:20:38.958 "zone_append": false, 00:20:38.958 "compare": false, 00:20:38.958 "compare_and_write": false, 
00:20:38.958 "abort": true, 00:20:38.958 "seek_hole": false, 00:20:38.958 "seek_data": false, 00:20:38.958 "copy": true, 00:20:38.958 "nvme_iov_md": false 00:20:38.958 }, 00:20:38.958 "memory_domains": [ 00:20:38.958 { 00:20:38.958 "dma_device_id": "system", 00:20:38.958 "dma_device_type": 1 00:20:38.958 }, 00:20:38.958 { 00:20:38.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.958 "dma_device_type": 2 00:20:38.958 } 00:20:38.958 ], 00:20:38.958 "driver_specific": {} 00:20:38.958 } 00:20:38.958 ] 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.958 "name": "Existed_Raid", 00:20:38.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.958 "strip_size_kb": 64, 00:20:38.958 "state": "configuring", 00:20:38.958 "raid_level": "concat", 00:20:38.958 "superblock": false, 00:20:38.958 "num_base_bdevs": 3, 00:20:38.958 "num_base_bdevs_discovered": 2, 00:20:38.958 "num_base_bdevs_operational": 3, 00:20:38.958 "base_bdevs_list": [ 00:20:38.958 { 00:20:38.958 "name": "BaseBdev1", 00:20:38.958 "uuid": "c3ffcda8-7bda-4083-a231-3ba1d0002cbc", 00:20:38.958 "is_configured": true, 00:20:38.958 "data_offset": 0, 00:20:38.958 "data_size": 65536 00:20:38.958 }, 00:20:38.958 { 00:20:38.958 "name": null, 00:20:38.958 "uuid": "57bf04fc-9d84-4d26-803c-8403c8c4f7c0", 00:20:38.958 "is_configured": false, 00:20:38.958 "data_offset": 0, 00:20:38.958 "data_size": 65536 00:20:38.958 }, 00:20:38.958 { 00:20:38.958 "name": "BaseBdev3", 00:20:38.958 "uuid": "40f44653-975b-45bb-9ca8-c3f4d9bc18ad", 00:20:38.958 "is_configured": true, 00:20:38.958 "data_offset": 0, 00:20:38.958 "data_size": 65536 00:20:38.958 } 00:20:38.958 ] 00:20:38.958 }' 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.958 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.218 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 
00:20:39.218 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.218 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.218 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.218 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.218 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:39.218 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:39.218 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.218 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.218 [2024-12-05 12:52:21.750573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:39.218 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.218 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:39.218 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:39.218 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:39.218 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:39.218 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:39.218 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:39.218 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:39.219 12:52:21 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:39.219 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:39.219 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:39.219 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.219 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:39.219 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.219 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.219 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.219 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:39.219 "name": "Existed_Raid", 00:20:39.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.219 "strip_size_kb": 64, 00:20:39.219 "state": "configuring", 00:20:39.219 "raid_level": "concat", 00:20:39.219 "superblock": false, 00:20:39.219 "num_base_bdevs": 3, 00:20:39.219 "num_base_bdevs_discovered": 1, 00:20:39.219 "num_base_bdevs_operational": 3, 00:20:39.219 "base_bdevs_list": [ 00:20:39.219 { 00:20:39.219 "name": "BaseBdev1", 00:20:39.219 "uuid": "c3ffcda8-7bda-4083-a231-3ba1d0002cbc", 00:20:39.219 "is_configured": true, 00:20:39.219 "data_offset": 0, 00:20:39.219 "data_size": 65536 00:20:39.219 }, 00:20:39.219 { 00:20:39.219 "name": null, 00:20:39.219 "uuid": "57bf04fc-9d84-4d26-803c-8403c8c4f7c0", 00:20:39.219 "is_configured": false, 00:20:39.219 "data_offset": 0, 00:20:39.219 "data_size": 65536 00:20:39.219 }, 00:20:39.219 { 00:20:39.219 "name": null, 00:20:39.219 "uuid": "40f44653-975b-45bb-9ca8-c3f4d9bc18ad", 00:20:39.219 "is_configured": false, 00:20:39.219 "data_offset": 0, 00:20:39.219 "data_size": 65536 
00:20:39.219 } 00:20:39.219 ] 00:20:39.219 }' 00:20:39.219 12:52:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:39.219 12:52:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.787 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:39.787 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.787 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.787 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.787 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.787 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:39.787 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:39.787 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.787 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.787 [2024-12-05 12:52:22.102658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:39.787 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.787 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:39.787 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:39.787 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:39.787 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:20:39.787 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:39.787 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:39.787 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:39.787 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:39.787 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:39.787 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:39.788 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:39.788 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.788 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.788 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.788 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.788 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:39.788 "name": "Existed_Raid", 00:20:39.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.788 "strip_size_kb": 64, 00:20:39.788 "state": "configuring", 00:20:39.788 "raid_level": "concat", 00:20:39.788 "superblock": false, 00:20:39.788 "num_base_bdevs": 3, 00:20:39.788 "num_base_bdevs_discovered": 2, 00:20:39.788 "num_base_bdevs_operational": 3, 00:20:39.788 "base_bdevs_list": [ 00:20:39.788 { 00:20:39.788 "name": "BaseBdev1", 00:20:39.788 "uuid": "c3ffcda8-7bda-4083-a231-3ba1d0002cbc", 00:20:39.788 "is_configured": true, 00:20:39.788 "data_offset": 0, 00:20:39.788 "data_size": 65536 00:20:39.788 }, 00:20:39.788 { 
00:20:39.788 "name": null, 00:20:39.788 "uuid": "57bf04fc-9d84-4d26-803c-8403c8c4f7c0", 00:20:39.788 "is_configured": false, 00:20:39.788 "data_offset": 0, 00:20:39.788 "data_size": 65536 00:20:39.788 }, 00:20:39.788 { 00:20:39.788 "name": "BaseBdev3", 00:20:39.788 "uuid": "40f44653-975b-45bb-9ca8-c3f4d9bc18ad", 00:20:39.788 "is_configured": true, 00:20:39.788 "data_offset": 0, 00:20:39.788 "data_size": 65536 00:20:39.788 } 00:20:39.788 ] 00:20:39.788 }' 00:20:39.788 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:39.788 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.048 [2024-12-05 12:52:22.450740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 3 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.048 "name": "Existed_Raid", 00:20:40.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.048 "strip_size_kb": 64, 00:20:40.048 "state": "configuring", 00:20:40.048 "raid_level": "concat", 00:20:40.048 "superblock": false, 00:20:40.048 "num_base_bdevs": 3, 
00:20:40.048 "num_base_bdevs_discovered": 1, 00:20:40.048 "num_base_bdevs_operational": 3, 00:20:40.048 "base_bdevs_list": [ 00:20:40.048 { 00:20:40.048 "name": null, 00:20:40.048 "uuid": "c3ffcda8-7bda-4083-a231-3ba1d0002cbc", 00:20:40.048 "is_configured": false, 00:20:40.048 "data_offset": 0, 00:20:40.048 "data_size": 65536 00:20:40.048 }, 00:20:40.048 { 00:20:40.048 "name": null, 00:20:40.048 "uuid": "57bf04fc-9d84-4d26-803c-8403c8c4f7c0", 00:20:40.048 "is_configured": false, 00:20:40.048 "data_offset": 0, 00:20:40.048 "data_size": 65536 00:20:40.048 }, 00:20:40.048 { 00:20:40.048 "name": "BaseBdev3", 00:20:40.048 "uuid": "40f44653-975b-45bb-9ca8-c3f4d9bc18ad", 00:20:40.048 "is_configured": true, 00:20:40.048 "data_offset": 0, 00:20:40.048 "data_size": 65536 00:20:40.048 } 00:20:40.048 ] 00:20:40.048 }' 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.048 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.309 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.309 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:40.309 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.309 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.309 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.309 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:40.309 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:40.309 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.309 12:52:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.309 [2024-12-05 12:52:22.849540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:40.309 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.309 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:40.310 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:40.310 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:40.310 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:40.310 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:40.310 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:40.310 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:40.310 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:40.310 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:40.310 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:40.310 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.310 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:40.310 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.310 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.310 12:52:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.310 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.310 "name": "Existed_Raid", 00:20:40.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.310 "strip_size_kb": 64, 00:20:40.310 "state": "configuring", 00:20:40.310 "raid_level": "concat", 00:20:40.310 "superblock": false, 00:20:40.310 "num_base_bdevs": 3, 00:20:40.310 "num_base_bdevs_discovered": 2, 00:20:40.310 "num_base_bdevs_operational": 3, 00:20:40.310 "base_bdevs_list": [ 00:20:40.310 { 00:20:40.310 "name": null, 00:20:40.310 "uuid": "c3ffcda8-7bda-4083-a231-3ba1d0002cbc", 00:20:40.310 "is_configured": false, 00:20:40.310 "data_offset": 0, 00:20:40.310 "data_size": 65536 00:20:40.310 }, 00:20:40.310 { 00:20:40.310 "name": "BaseBdev2", 00:20:40.310 "uuid": "57bf04fc-9d84-4d26-803c-8403c8c4f7c0", 00:20:40.310 "is_configured": true, 00:20:40.310 "data_offset": 0, 00:20:40.310 "data_size": 65536 00:20:40.310 }, 00:20:40.310 { 00:20:40.310 "name": "BaseBdev3", 00:20:40.310 "uuid": "40f44653-975b-45bb-9ca8-c3f4d9bc18ad", 00:20:40.310 "is_configured": true, 00:20:40.310 "data_offset": 0, 00:20:40.310 "data_size": 65536 00:20:40.310 } 00:20:40.310 ] 00:20:40.310 }' 00:20:40.310 12:52:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.310 12:52:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c3ffcda8-7bda-4083-a231-3ba1d0002cbc 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.879 [2024-12-05 12:52:23.228079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:40.879 [2024-12-05 12:52:23.228120] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:40.879 [2024-12-05 12:52:23.228128] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:40.879 [2024-12-05 12:52:23.228327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:40.879 [2024-12-05 12:52:23.228436] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:40.879 [2024-12-05 12:52:23.228443] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:40.879 [2024-12-05 12:52:23.228628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:20:40.879 NewBaseBdev 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.879 [ 00:20:40.879 { 00:20:40.879 "name": "NewBaseBdev", 00:20:40.879 "aliases": [ 00:20:40.879 "c3ffcda8-7bda-4083-a231-3ba1d0002cbc" 00:20:40.879 ], 00:20:40.879 "product_name": "Malloc disk", 00:20:40.879 "block_size": 512, 00:20:40.879 "num_blocks": 65536, 00:20:40.879 "uuid": "c3ffcda8-7bda-4083-a231-3ba1d0002cbc", 00:20:40.879 "assigned_rate_limits": { 
00:20:40.879 "rw_ios_per_sec": 0, 00:20:40.879 "rw_mbytes_per_sec": 0, 00:20:40.879 "r_mbytes_per_sec": 0, 00:20:40.879 "w_mbytes_per_sec": 0 00:20:40.879 }, 00:20:40.879 "claimed": true, 00:20:40.879 "claim_type": "exclusive_write", 00:20:40.879 "zoned": false, 00:20:40.879 "supported_io_types": { 00:20:40.879 "read": true, 00:20:40.879 "write": true, 00:20:40.879 "unmap": true, 00:20:40.879 "flush": true, 00:20:40.879 "reset": true, 00:20:40.879 "nvme_admin": false, 00:20:40.879 "nvme_io": false, 00:20:40.879 "nvme_io_md": false, 00:20:40.879 "write_zeroes": true, 00:20:40.879 "zcopy": true, 00:20:40.879 "get_zone_info": false, 00:20:40.879 "zone_management": false, 00:20:40.879 "zone_append": false, 00:20:40.879 "compare": false, 00:20:40.879 "compare_and_write": false, 00:20:40.879 "abort": true, 00:20:40.879 "seek_hole": false, 00:20:40.879 "seek_data": false, 00:20:40.879 "copy": true, 00:20:40.879 "nvme_iov_md": false 00:20:40.879 }, 00:20:40.879 "memory_domains": [ 00:20:40.879 { 00:20:40.879 "dma_device_id": "system", 00:20:40.879 "dma_device_type": 1 00:20:40.879 }, 00:20:40.879 { 00:20:40.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.879 "dma_device_type": 2 00:20:40.879 } 00:20:40.879 ], 00:20:40.879 "driver_specific": {} 00:20:40.879 } 00:20:40.879 ] 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.879 "name": "Existed_Raid", 00:20:40.879 "uuid": "9fee8a6a-72e4-4416-9f83-d619d1506b8c", 00:20:40.879 "strip_size_kb": 64, 00:20:40.879 "state": "online", 00:20:40.879 "raid_level": "concat", 00:20:40.879 "superblock": false, 00:20:40.879 "num_base_bdevs": 3, 00:20:40.879 "num_base_bdevs_discovered": 3, 00:20:40.879 "num_base_bdevs_operational": 3, 00:20:40.879 "base_bdevs_list": [ 00:20:40.879 { 00:20:40.879 "name": "NewBaseBdev", 00:20:40.879 "uuid": "c3ffcda8-7bda-4083-a231-3ba1d0002cbc", 00:20:40.879 "is_configured": true, 00:20:40.879 "data_offset": 0, 00:20:40.879 "data_size": 65536 00:20:40.879 }, 00:20:40.879 { 00:20:40.879 "name": 
"BaseBdev2", 00:20:40.879 "uuid": "57bf04fc-9d84-4d26-803c-8403c8c4f7c0", 00:20:40.879 "is_configured": true, 00:20:40.879 "data_offset": 0, 00:20:40.879 "data_size": 65536 00:20:40.879 }, 00:20:40.879 { 00:20:40.879 "name": "BaseBdev3", 00:20:40.879 "uuid": "40f44653-975b-45bb-9ca8-c3f4d9bc18ad", 00:20:40.879 "is_configured": true, 00:20:40.879 "data_offset": 0, 00:20:40.879 "data_size": 65536 00:20:40.879 } 00:20:40.879 ] 00:20:40.879 }' 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.879 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.140 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:41.140 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:41.140 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:41.140 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:41.140 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:41.140 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:41.140 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:41.140 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:41.140 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.140 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.140 [2024-12-05 12:52:23.560443] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:41.140 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:41.140 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:41.140 "name": "Existed_Raid", 00:20:41.140 "aliases": [ 00:20:41.140 "9fee8a6a-72e4-4416-9f83-d619d1506b8c" 00:20:41.140 ], 00:20:41.140 "product_name": "Raid Volume", 00:20:41.140 "block_size": 512, 00:20:41.140 "num_blocks": 196608, 00:20:41.140 "uuid": "9fee8a6a-72e4-4416-9f83-d619d1506b8c", 00:20:41.140 "assigned_rate_limits": { 00:20:41.140 "rw_ios_per_sec": 0, 00:20:41.140 "rw_mbytes_per_sec": 0, 00:20:41.140 "r_mbytes_per_sec": 0, 00:20:41.140 "w_mbytes_per_sec": 0 00:20:41.140 }, 00:20:41.140 "claimed": false, 00:20:41.140 "zoned": false, 00:20:41.140 "supported_io_types": { 00:20:41.140 "read": true, 00:20:41.140 "write": true, 00:20:41.140 "unmap": true, 00:20:41.140 "flush": true, 00:20:41.140 "reset": true, 00:20:41.140 "nvme_admin": false, 00:20:41.140 "nvme_io": false, 00:20:41.140 "nvme_io_md": false, 00:20:41.140 "write_zeroes": true, 00:20:41.140 "zcopy": false, 00:20:41.140 "get_zone_info": false, 00:20:41.140 "zone_management": false, 00:20:41.140 "zone_append": false, 00:20:41.140 "compare": false, 00:20:41.140 "compare_and_write": false, 00:20:41.140 "abort": false, 00:20:41.140 "seek_hole": false, 00:20:41.140 "seek_data": false, 00:20:41.140 "copy": false, 00:20:41.140 "nvme_iov_md": false 00:20:41.140 }, 00:20:41.140 "memory_domains": [ 00:20:41.140 { 00:20:41.140 "dma_device_id": "system", 00:20:41.140 "dma_device_type": 1 00:20:41.140 }, 00:20:41.140 { 00:20:41.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.140 "dma_device_type": 2 00:20:41.140 }, 00:20:41.140 { 00:20:41.140 "dma_device_id": "system", 00:20:41.140 "dma_device_type": 1 00:20:41.140 }, 00:20:41.140 { 00:20:41.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.140 "dma_device_type": 2 00:20:41.140 }, 00:20:41.140 { 00:20:41.140 "dma_device_id": "system", 00:20:41.140 "dma_device_type": 1 00:20:41.140 }, 00:20:41.140 { 00:20:41.140 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:41.140 "dma_device_type": 2 00:20:41.140 } 00:20:41.140 ], 00:20:41.140 "driver_specific": { 00:20:41.140 "raid": { 00:20:41.140 "uuid": "9fee8a6a-72e4-4416-9f83-d619d1506b8c", 00:20:41.140 "strip_size_kb": 64, 00:20:41.140 "state": "online", 00:20:41.140 "raid_level": "concat", 00:20:41.140 "superblock": false, 00:20:41.140 "num_base_bdevs": 3, 00:20:41.140 "num_base_bdevs_discovered": 3, 00:20:41.140 "num_base_bdevs_operational": 3, 00:20:41.140 "base_bdevs_list": [ 00:20:41.140 { 00:20:41.140 "name": "NewBaseBdev", 00:20:41.140 "uuid": "c3ffcda8-7bda-4083-a231-3ba1d0002cbc", 00:20:41.140 "is_configured": true, 00:20:41.140 "data_offset": 0, 00:20:41.140 "data_size": 65536 00:20:41.140 }, 00:20:41.140 { 00:20:41.140 "name": "BaseBdev2", 00:20:41.141 "uuid": "57bf04fc-9d84-4d26-803c-8403c8c4f7c0", 00:20:41.141 "is_configured": true, 00:20:41.141 "data_offset": 0, 00:20:41.141 "data_size": 65536 00:20:41.141 }, 00:20:41.141 { 00:20:41.141 "name": "BaseBdev3", 00:20:41.141 "uuid": "40f44653-975b-45bb-9ca8-c3f4d9bc18ad", 00:20:41.141 "is_configured": true, 00:20:41.141 "data_offset": 0, 00:20:41.141 "data_size": 65536 00:20:41.141 } 00:20:41.141 ] 00:20:41.141 } 00:20:41.141 } 00:20:41.141 }' 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:41.141 BaseBdev2 00:20:41.141 BaseBdev3' 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:41.141 12:52:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:41.141 
12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.141 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.400 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.400 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:41.400 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:41.400 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:41.400 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.400 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.400 [2024-12-05 12:52:23.740210] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:41.400 [2024-12-05 12:52:23.740237] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:41.400 [2024-12-05 12:52:23.740296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:41.400 [2024-12-05 12:52:23.740349] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:41.400 [2024-12-05 12:52:23.740358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:41.400 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.400 12:52:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63947 00:20:41.400 12:52:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 63947 ']' 00:20:41.400 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63947 00:20:41.400 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:20:41.400 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.400 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63947 00:20:41.400 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:41.400 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:41.400 killing process with pid 63947 00:20:41.400 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63947' 00:20:41.400 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63947 00:20:41.400 [2024-12-05 12:52:23.771038] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:41.400 12:52:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63947 00:20:41.400 [2024-12-05 12:52:23.918022] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:20:41.969 00:20:41.969 real 0m7.282s 00:20:41.969 user 0m11.770s 00:20:41.969 sys 0m1.178s 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.969 ************************************ 00:20:41.969 END TEST raid_state_function_test 00:20:41.969 ************************************ 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.969 12:52:24 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 3 true 00:20:41.969 12:52:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:41.969 12:52:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.969 12:52:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:41.969 ************************************ 00:20:41.969 START TEST raid_state_function_test_sb 00:20:41.969 ************************************ 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64541 00:20:41.969 Process raid pid: 64541 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64541' 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64541 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64541 ']' 00:20:41.969 
12:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:41.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.969 12:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.229 [2024-12-05 12:52:24.605369] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:20:42.229 [2024-12-05 12:52:24.605500] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.229 [2024-12-05 12:52:24.763160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.488 [2024-12-05 12:52:24.865040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.488 [2024-12-05 12:52:25.000851] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:42.488 [2024-12-05 12:52:25.000892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.054 [2024-12-05 12:52:25.459648] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:43.054 [2024-12-05 12:52:25.459703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:43.054 [2024-12-05 12:52:25.459713] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:43.054 [2024-12-05 12:52:25.459722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:43.054 [2024-12-05 12:52:25.459729] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:20:43.054 [2024-12-05 12:52:25.459738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.054 "name": "Existed_Raid", 00:20:43.054 "uuid": "51f32e9f-b738-4014-a720-0af36d684c7b", 00:20:43.054 "strip_size_kb": 64, 00:20:43.054 "state": "configuring", 00:20:43.054 "raid_level": "concat", 00:20:43.054 "superblock": true, 00:20:43.054 "num_base_bdevs": 3, 00:20:43.054 "num_base_bdevs_discovered": 0, 00:20:43.054 "num_base_bdevs_operational": 3, 00:20:43.054 "base_bdevs_list": [ 00:20:43.054 { 00:20:43.054 "name": "BaseBdev1", 00:20:43.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.054 "is_configured": false, 00:20:43.054 "data_offset": 0, 00:20:43.054 "data_size": 0 00:20:43.054 }, 00:20:43.054 { 00:20:43.054 "name": "BaseBdev2", 00:20:43.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.054 "is_configured": false, 00:20:43.054 "data_offset": 0, 00:20:43.054 "data_size": 0 00:20:43.054 }, 00:20:43.054 { 00:20:43.054 "name": "BaseBdev3", 00:20:43.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.054 "is_configured": false, 00:20:43.054 "data_offset": 0, 00:20:43.054 "data_size": 0 00:20:43.054 } 00:20:43.054 ] 00:20:43.054 }' 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.054 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.314 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:43.314 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.314 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.314 [2024-12-05 12:52:25.803666] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:43.314 [2024-12-05 12:52:25.803701] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:43.314 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.314 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:43.314 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.314 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.314 [2024-12-05 12:52:25.811679] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:43.314 [2024-12-05 12:52:25.811719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:43.314 [2024-12-05 12:52:25.811727] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:43.314 [2024-12-05 12:52:25.811736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:43.314 [2024-12-05 12:52:25.811742] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:43.315 [2024-12-05 12:52:25.811750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.315 [2024-12-05 12:52:25.843820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:43.315 BaseBdev1 
00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.315 [ 00:20:43.315 { 00:20:43.315 "name": "BaseBdev1", 00:20:43.315 "aliases": [ 00:20:43.315 "6efe17c7-3db1-4890-b884-08dcc988c665" 00:20:43.315 ], 00:20:43.315 "product_name": "Malloc disk", 00:20:43.315 "block_size": 512, 00:20:43.315 "num_blocks": 65536, 00:20:43.315 "uuid": "6efe17c7-3db1-4890-b884-08dcc988c665", 00:20:43.315 "assigned_rate_limits": { 00:20:43.315 
"rw_ios_per_sec": 0, 00:20:43.315 "rw_mbytes_per_sec": 0, 00:20:43.315 "r_mbytes_per_sec": 0, 00:20:43.315 "w_mbytes_per_sec": 0 00:20:43.315 }, 00:20:43.315 "claimed": true, 00:20:43.315 "claim_type": "exclusive_write", 00:20:43.315 "zoned": false, 00:20:43.315 "supported_io_types": { 00:20:43.315 "read": true, 00:20:43.315 "write": true, 00:20:43.315 "unmap": true, 00:20:43.315 "flush": true, 00:20:43.315 "reset": true, 00:20:43.315 "nvme_admin": false, 00:20:43.315 "nvme_io": false, 00:20:43.315 "nvme_io_md": false, 00:20:43.315 "write_zeroes": true, 00:20:43.315 "zcopy": true, 00:20:43.315 "get_zone_info": false, 00:20:43.315 "zone_management": false, 00:20:43.315 "zone_append": false, 00:20:43.315 "compare": false, 00:20:43.315 "compare_and_write": false, 00:20:43.315 "abort": true, 00:20:43.315 "seek_hole": false, 00:20:43.315 "seek_data": false, 00:20:43.315 "copy": true, 00:20:43.315 "nvme_iov_md": false 00:20:43.315 }, 00:20:43.315 "memory_domains": [ 00:20:43.315 { 00:20:43.315 "dma_device_id": "system", 00:20:43.315 "dma_device_type": 1 00:20:43.315 }, 00:20:43.315 { 00:20:43.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.315 "dma_device_type": 2 00:20:43.315 } 00:20:43.315 ], 00:20:43.315 "driver_specific": {} 00:20:43.315 } 00:20:43.315 ] 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:43.315 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.574 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.574 "name": "Existed_Raid", 00:20:43.574 "uuid": "4965ac43-b30d-4453-ad57-f7e46c053e5a", 00:20:43.574 "strip_size_kb": 64, 00:20:43.574 "state": "configuring", 00:20:43.574 "raid_level": "concat", 00:20:43.574 "superblock": true, 00:20:43.574 "num_base_bdevs": 3, 00:20:43.574 "num_base_bdevs_discovered": 1, 00:20:43.574 "num_base_bdevs_operational": 3, 00:20:43.574 "base_bdevs_list": [ 00:20:43.574 { 00:20:43.574 "name": "BaseBdev1", 00:20:43.574 "uuid": "6efe17c7-3db1-4890-b884-08dcc988c665", 00:20:43.574 "is_configured": true, 00:20:43.574 "data_offset": 2048, 00:20:43.574 "data_size": 
63488
00:20:43.574 },
00:20:43.574 {
00:20:43.574 "name": "BaseBdev2",
00:20:43.574 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:43.574 "is_configured": false,
00:20:43.574 "data_offset": 0,
00:20:43.574 "data_size": 0
00:20:43.574 },
00:20:43.574 {
00:20:43.574 "name": "BaseBdev3",
00:20:43.574 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:43.574 "is_configured": false,
00:20:43.574 "data_offset": 0,
00:20:43.574 "data_size": 0
00:20:43.574 }
00:20:43.574 ]
00:20:43.574 }'
00:20:43.574 12:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:43.574 12:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:43.834 [2024-12-05 12:52:26.171944] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:20:43.834 [2024-12-05 12:52:26.171991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:43.834 [2024-12-05 12:52:26.179992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:20:43.834 [2024-12-05 12:52:26.181798] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:20:43.834 [2024-12-05 12:52:26.181836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:20:43.834 [2024-12-05 12:52:26.181845] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:20:43.834 [2024-12-05 12:52:26.181854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:43.834 "name": "Existed_Raid",
00:20:43.834 "uuid": "ab52908c-3d11-47b6-8b41-6071f1fd543f",
00:20:43.834 "strip_size_kb": 64,
00:20:43.834 "state": "configuring",
00:20:43.834 "raid_level": "concat",
00:20:43.834 "superblock": true,
00:20:43.834 "num_base_bdevs": 3,
00:20:43.834 "num_base_bdevs_discovered": 1,
00:20:43.834 "num_base_bdevs_operational": 3,
00:20:43.834 "base_bdevs_list": [
00:20:43.834 {
00:20:43.834 "name": "BaseBdev1",
00:20:43.834 "uuid": "6efe17c7-3db1-4890-b884-08dcc988c665",
00:20:43.834 "is_configured": true,
00:20:43.834 "data_offset": 2048,
00:20:43.834 "data_size": 63488
00:20:43.834 },
00:20:43.834 {
00:20:43.834 "name": "BaseBdev2",
00:20:43.834 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:43.834 "is_configured": false,
00:20:43.834 "data_offset": 0,
00:20:43.834 "data_size": 0
00:20:43.834 },
00:20:43.834 {
00:20:43.834 "name": "BaseBdev3",
00:20:43.834 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:43.834 "is_configured": false,
00:20:43.834 "data_offset": 0,
00:20:43.834 "data_size": 0
00:20:43.834 }
00:20:43.834 ]
00:20:43.834 }'
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:43.834 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:44.095 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:20:44.095 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.095 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:44.095 [2024-12-05 12:52:26.515099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:20:44.095 BaseBdev2
00:20:44.095 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.095 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:20:44.095 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:20:44.095 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:20:44.095 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:20:44.095 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:20:44.095 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:20:44.095 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:20:44.095 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.095 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:44.095 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.095 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:20:44.095 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.095 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:44.095 [
00:20:44.095 {
00:20:44.095 "name": "BaseBdev2",
00:20:44.095 "aliases": [
00:20:44.095 "e92e9554-6b3b-4fce-9e0d-102a3a947b61"
00:20:44.095 ],
00:20:44.095 "product_name": "Malloc disk",
00:20:44.095 "block_size": 512,
00:20:44.095 "num_blocks": 65536,
00:20:44.095 "uuid": "e92e9554-6b3b-4fce-9e0d-102a3a947b61",
00:20:44.095 "assigned_rate_limits": {
00:20:44.095 "rw_ios_per_sec": 0,
00:20:44.095 "rw_mbytes_per_sec": 0,
00:20:44.095 "r_mbytes_per_sec": 0,
00:20:44.095 "w_mbytes_per_sec": 0
00:20:44.095 },
00:20:44.095 "claimed": true,
00:20:44.095 "claim_type": "exclusive_write",
00:20:44.095 "zoned": false,
00:20:44.095 "supported_io_types": {
00:20:44.095 "read": true,
00:20:44.095 "write": true,
00:20:44.095 "unmap": true,
00:20:44.095 "flush": true,
00:20:44.095 "reset": true,
00:20:44.095 "nvme_admin": false,
00:20:44.095 "nvme_io": false,
00:20:44.095 "nvme_io_md": false,
00:20:44.095 "write_zeroes": true,
00:20:44.095 "zcopy": true,
00:20:44.095 "get_zone_info": false,
00:20:44.095 "zone_management": false,
00:20:44.095 "zone_append": false,
00:20:44.095 "compare": false,
00:20:44.095 "compare_and_write": false,
00:20:44.095 "abort": true,
00:20:44.095 "seek_hole": false,
00:20:44.095 "seek_data": false,
00:20:44.095 "copy": true,
00:20:44.096 "nvme_iov_md": false
00:20:44.096 },
00:20:44.096 "memory_domains": [
00:20:44.096 {
00:20:44.096 "dma_device_id": "system",
00:20:44.096 "dma_device_type": 1
00:20:44.096 },
00:20:44.096 {
00:20:44.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:44.096 "dma_device_type": 2
00:20:44.096 }
00:20:44.096 ],
00:20:44.096 "driver_specific": {}
00:20:44.096 }
00:20:44.096 ]
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:44.096 "name": "Existed_Raid",
00:20:44.096 "uuid": "ab52908c-3d11-47b6-8b41-6071f1fd543f",
00:20:44.096 "strip_size_kb": 64,
00:20:44.096 "state": "configuring",
00:20:44.096 "raid_level": "concat",
00:20:44.096 "superblock": true,
00:20:44.096 "num_base_bdevs": 3,
00:20:44.096 "num_base_bdevs_discovered": 2,
00:20:44.096 "num_base_bdevs_operational": 3,
00:20:44.096 "base_bdevs_list": [
00:20:44.096 {
00:20:44.096 "name": "BaseBdev1",
00:20:44.096 "uuid": "6efe17c7-3db1-4890-b884-08dcc988c665",
00:20:44.096 "is_configured": true,
00:20:44.096 "data_offset": 2048,
00:20:44.096 "data_size": 63488
00:20:44.096 },
00:20:44.096 {
00:20:44.096 "name": "BaseBdev2",
00:20:44.096 "uuid": "e92e9554-6b3b-4fce-9e0d-102a3a947b61",
00:20:44.096 "is_configured": true,
00:20:44.096 "data_offset": 2048,
00:20:44.096 "data_size": 63488
00:20:44.096 },
00:20:44.096 {
00:20:44.096 "name": "BaseBdev3",
00:20:44.096 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:44.096 "is_configured": false,
00:20:44.096 "data_offset": 0,
00:20:44.096 "data_size": 0
00:20:44.096 }
00:20:44.096 ]
00:20:44.096 }'
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:44.096 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:44.357 [2024-12-05 12:52:26.907941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:20:44.357 [2024-12-05 12:52:26.908178] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:20:44.357 [2024-12-05 12:52:26.908197] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:20:44.357 [2024-12-05 12:52:26.908450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:20:44.357 BaseBdev3
[2024-12-05 12:52:26.908605] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
[2024-12-05 12:52:26.908615] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
[2024-12-05 12:52:26.908744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:44.357 [
00:20:44.357 {
00:20:44.357 "name": "BaseBdev3",
00:20:44.357 "aliases": [
00:20:44.357 "506c2c00-c0ea-46c5-a29d-fc7ec6f1552b"
00:20:44.357 ],
00:20:44.357 "product_name": "Malloc disk",
00:20:44.357 "block_size": 512,
00:20:44.357 "num_blocks": 65536,
00:20:44.357 "uuid": "506c2c00-c0ea-46c5-a29d-fc7ec6f1552b",
00:20:44.357 "assigned_rate_limits": {
00:20:44.357 "rw_ios_per_sec": 0,
00:20:44.357 "rw_mbytes_per_sec": 0,
00:20:44.357 "r_mbytes_per_sec": 0,
00:20:44.357 "w_mbytes_per_sec": 0
00:20:44.357 },
00:20:44.357 "claimed": true,
00:20:44.357 "claim_type": "exclusive_write",
00:20:44.357 "zoned": false,
00:20:44.357 "supported_io_types": {
00:20:44.357 "read": true,
00:20:44.357 "write": true,
00:20:44.357 "unmap": true,
00:20:44.357 "flush": true,
00:20:44.357 "reset": true,
00:20:44.357 "nvme_admin": false,
00:20:44.357 "nvme_io": false,
00:20:44.357 "nvme_io_md": false,
00:20:44.357 "write_zeroes": true,
00:20:44.357 "zcopy": true,
00:20:44.357 "get_zone_info": false,
00:20:44.357 "zone_management": false,
00:20:44.357 "zone_append": false,
00:20:44.357 "compare": false,
00:20:44.357 "compare_and_write": false,
00:20:44.357 "abort": true,
00:20:44.357 "seek_hole": false,
00:20:44.357 "seek_data": false,
00:20:44.357 "copy": true,
00:20:44.357 "nvme_iov_md": false
00:20:44.357 },
00:20:44.357 "memory_domains": [
00:20:44.357 {
00:20:44.357 "dma_device_id": "system",
00:20:44.357 "dma_device_type": 1
00:20:44.357 },
00:20:44.357 {
00:20:44.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:44.357 "dma_device_type": 2
00:20:44.357 }
00:20:44.357 ],
00:20:44.357 "driver_specific": {}
00:20:44.357 }
00:20:44.357 ]
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:44.357 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.618 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:44.618 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.618 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:44.618 "name": "Existed_Raid",
00:20:44.618 "uuid": "ab52908c-3d11-47b6-8b41-6071f1fd543f",
00:20:44.618 "strip_size_kb": 64,
00:20:44.618 "state": "online",
00:20:44.618 "raid_level": "concat",
00:20:44.618 "superblock": true,
00:20:44.618 "num_base_bdevs": 3,
00:20:44.618 "num_base_bdevs_discovered": 3,
00:20:44.618 "num_base_bdevs_operational": 3,
00:20:44.618 "base_bdevs_list": [
00:20:44.618 {
00:20:44.618 "name": "BaseBdev1",
00:20:44.618 "uuid": "6efe17c7-3db1-4890-b884-08dcc988c665",
00:20:44.618 "is_configured": true,
00:20:44.618 "data_offset": 2048,
00:20:44.618 "data_size": 63488
00:20:44.618 },
00:20:44.618 {
00:20:44.618 "name": "BaseBdev2",
00:20:44.618 "uuid": "e92e9554-6b3b-4fce-9e0d-102a3a947b61",
00:20:44.618 "is_configured": true,
00:20:44.618 "data_offset": 2048,
00:20:44.618 "data_size": 63488
00:20:44.618 },
00:20:44.618 {
00:20:44.618 "name": "BaseBdev3",
00:20:44.618 "uuid": "506c2c00-c0ea-46c5-a29d-fc7ec6f1552b",
00:20:44.618 "is_configured": true,
00:20:44.618 "data_offset": 2048,
00:20:44.618 "data_size": 63488
00:20:44.618 }
00:20:44.618 ]
00:20:44.618 }'
00:20:44.618 12:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:44.618 12:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:44.877 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:20:44.877 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:20:44.877 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:20:44.877 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:20:44.877 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:20:44.877 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:20:44.877 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:20:44.877 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.877 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:20:44.877 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:44.877 [2024-12-05 12:52:27.256386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:44.877 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.877 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:20:44.877 "name": "Existed_Raid",
00:20:44.877 "aliases": [
00:20:44.877 "ab52908c-3d11-47b6-8b41-6071f1fd543f"
00:20:44.877 ],
00:20:44.877 "product_name": "Raid Volume",
00:20:44.877 "block_size": 512,
00:20:44.877 "num_blocks": 190464,
00:20:44.877 "uuid": "ab52908c-3d11-47b6-8b41-6071f1fd543f",
00:20:44.877 "assigned_rate_limits": {
00:20:44.877 "rw_ios_per_sec": 0,
00:20:44.877 "rw_mbytes_per_sec": 0,
00:20:44.877 "r_mbytes_per_sec": 0,
00:20:44.877 "w_mbytes_per_sec": 0
00:20:44.877 },
00:20:44.877 "claimed": false,
00:20:44.877 "zoned": false,
00:20:44.877 "supported_io_types": {
00:20:44.877 "read": true,
00:20:44.877 "write": true,
00:20:44.877 "unmap": true,
00:20:44.877 "flush": true,
00:20:44.877 "reset": true,
00:20:44.877 "nvme_admin": false,
00:20:44.877 "nvme_io": false,
00:20:44.877 "nvme_io_md": false,
00:20:44.877 "write_zeroes": true,
00:20:44.877 "zcopy": false,
00:20:44.877 "get_zone_info": false,
00:20:44.877 "zone_management": false,
00:20:44.877 "zone_append": false,
00:20:44.877 "compare": false,
00:20:44.877 "compare_and_write": false,
00:20:44.877 "abort": false,
00:20:44.877 "seek_hole": false,
00:20:44.877 "seek_data": false,
00:20:44.877 "copy": false,
00:20:44.877 "nvme_iov_md": false
00:20:44.877 },
00:20:44.877 "memory_domains": [
00:20:44.877 {
00:20:44.877 "dma_device_id": "system",
00:20:44.877 "dma_device_type": 1
00:20:44.877 },
00:20:44.877 {
00:20:44.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:44.877 "dma_device_type": 2
00:20:44.877 },
00:20:44.878 {
00:20:44.878 "dma_device_id": "system",
00:20:44.878 "dma_device_type": 1
00:20:44.878 },
00:20:44.878 {
00:20:44.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:44.878 "dma_device_type": 2
00:20:44.878 },
00:20:44.878 {
00:20:44.878 "dma_device_id": "system",
00:20:44.878 "dma_device_type": 1
00:20:44.878 },
00:20:44.878 {
00:20:44.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:44.878 "dma_device_type": 2
00:20:44.878 }
00:20:44.878 ],
00:20:44.878 "driver_specific": {
00:20:44.878 "raid": {
00:20:44.878 "uuid": "ab52908c-3d11-47b6-8b41-6071f1fd543f",
00:20:44.878 "strip_size_kb": 64,
00:20:44.878 "state": "online",
00:20:44.878 "raid_level": "concat",
00:20:44.878 "superblock": true,
00:20:44.878 "num_base_bdevs": 3,
00:20:44.878 "num_base_bdevs_discovered": 3,
00:20:44.878 "num_base_bdevs_operational": 3,
00:20:44.878 "base_bdevs_list": [
00:20:44.878 {
00:20:44.878 "name": "BaseBdev1",
00:20:44.878 "uuid": "6efe17c7-3db1-4890-b884-08dcc988c665",
00:20:44.878 "is_configured": true,
00:20:44.878 "data_offset": 2048,
00:20:44.878 "data_size": 63488
00:20:44.878 },
00:20:44.878 {
00:20:44.878 "name": "BaseBdev2",
00:20:44.878 "uuid": "e92e9554-6b3b-4fce-9e0d-102a3a947b61",
00:20:44.878 "is_configured": true,
00:20:44.878 "data_offset": 2048,
00:20:44.878 "data_size": 63488
00:20:44.878 },
00:20:44.878 {
00:20:44.878 "name": "BaseBdev3",
00:20:44.878 "uuid": "506c2c00-c0ea-46c5-a29d-fc7ec6f1552b",
00:20:44.878 "is_configured": true,
00:20:44.878 "data_offset": 2048,
00:20:44.878 "data_size": 63488
00:20:44.878 }
00:20:44.878 ]
00:20:44.878 }
00:20:44.878 }
00:20:44.878 }'
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:20:44.878 BaseBdev2
00:20:44.878 BaseBdev3'
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.878 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:44.878 [2024-12-05 12:52:27.448142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
[2024-12-05 12:52:27.448168] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-12-05 12:52:27.448218] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.138 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:45.138 "name": "Existed_Raid",
00:20:45.138 "uuid": "ab52908c-3d11-47b6-8b41-6071f1fd543f",
00:20:45.138 "strip_size_kb": 64,
00:20:45.138 "state": "offline",
00:20:45.138 "raid_level": "concat",
00:20:45.138 "superblock": true,
00:20:45.138 "num_base_bdevs": 3,
00:20:45.138 "num_base_bdevs_discovered": 2,
00:20:45.138 "num_base_bdevs_operational": 2,
00:20:45.138 "base_bdevs_list": [
00:20:45.138 {
00:20:45.138 "name": null,
00:20:45.138 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:45.138 "is_configured": false,
00:20:45.138 "data_offset": 0,
00:20:45.138 "data_size": 63488
00:20:45.139 },
00:20:45.139 {
00:20:45.139 "name": "BaseBdev2",
00:20:45.139 "uuid": "e92e9554-6b3b-4fce-9e0d-102a3a947b61",
00:20:45.139 "is_configured": true,
00:20:45.139 "data_offset": 2048,
00:20:45.139 "data_size": 63488
00:20:45.139 },
00:20:45.139 {
00:20:45.139 "name": "BaseBdev3",
00:20:45.139 "uuid": "506c2c00-c0ea-46c5-a29d-fc7ec6f1552b",
00:20:45.139 "is_configured": true,
00:20:45.139 "data_offset": 2048,
00:20:45.139 "data_size": 63488
00:20:45.139 }
00:20:45.139 ]
00:20:45.139 }'
00:20:45.139 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:45.139 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:45.400 [2024-12-05 12:52:27.856165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.400 12:52:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:45.400 [2024-12-05 12:52:27.951049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
[2024-12-05 12:52:27.951204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:45.662 BaseBdev2
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:45.662 [
00:20:45.662 {
00:20:45.662 "name": "BaseBdev2",
00:20:45.662 "aliases": [
00:20:45.662 "0628b237-eb64-4db8-86a3-aa2f232bb02b"
00:20:45.662 ],
00:20:45.662 "product_name": "Malloc disk",
00:20:45.662 "block_size": 512,
00:20:45.662 "num_blocks": 65536,
00:20:45.662 "uuid": "0628b237-eb64-4db8-86a3-aa2f232bb02b",
00:20:45.662 "assigned_rate_limits": {
00:20:45.662 "rw_ios_per_sec": 0,
00:20:45.662 "rw_mbytes_per_sec": 0,
00:20:45.662 "r_mbytes_per_sec": 0,
00:20:45.662 "w_mbytes_per_sec": 0
00:20:45.662 },
00:20:45.662 "claimed": false,
00:20:45.662 "zoned": false,
00:20:45.662 "supported_io_types": {
00:20:45.662 "read": true,
00:20:45.662 "write": true,
00:20:45.662 "unmap": true,
00:20:45.662 "flush": true,
00:20:45.662 "reset": true,
00:20:45.662 "nvme_admin": false,
00:20:45.662 "nvme_io": false,
00:20:45.662 "nvme_io_md": false,
00:20:45.662 "write_zeroes": true,
00:20:45.662 "zcopy": true,
00:20:45.662 "get_zone_info": false,
00:20:45.662
"zone_management": false, 00:20:45.662 "zone_append": false, 00:20:45.662 "compare": false, 00:20:45.662 "compare_and_write": false, 00:20:45.662 "abort": true, 00:20:45.662 "seek_hole": false, 00:20:45.662 "seek_data": false, 00:20:45.662 "copy": true, 00:20:45.662 "nvme_iov_md": false 00:20:45.662 }, 00:20:45.662 "memory_domains": [ 00:20:45.662 { 00:20:45.662 "dma_device_id": "system", 00:20:45.662 "dma_device_type": 1 00:20:45.662 }, 00:20:45.662 { 00:20:45.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.662 "dma_device_type": 2 00:20:45.662 } 00:20:45.662 ], 00:20:45.662 "driver_specific": {} 00:20:45.662 } 00:20:45.662 ] 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.662 BaseBdev3 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.662 [ 00:20:45.662 { 00:20:45.662 "name": "BaseBdev3", 00:20:45.662 "aliases": [ 00:20:45.662 "35408d68-116d-48f0-a374-c10a502c1e92" 00:20:45.662 ], 00:20:45.662 "product_name": "Malloc disk", 00:20:45.662 "block_size": 512, 00:20:45.662 "num_blocks": 65536, 00:20:45.662 "uuid": "35408d68-116d-48f0-a374-c10a502c1e92", 00:20:45.662 "assigned_rate_limits": { 00:20:45.662 "rw_ios_per_sec": 0, 00:20:45.662 "rw_mbytes_per_sec": 0, 00:20:45.662 "r_mbytes_per_sec": 0, 00:20:45.662 "w_mbytes_per_sec": 0 00:20:45.662 }, 00:20:45.662 "claimed": false, 00:20:45.662 "zoned": false, 00:20:45.662 "supported_io_types": { 00:20:45.662 "read": true, 00:20:45.662 "write": true, 00:20:45.662 "unmap": true, 00:20:45.662 "flush": true, 00:20:45.662 "reset": true, 00:20:45.662 "nvme_admin": false, 00:20:45.662 "nvme_io": false, 00:20:45.662 "nvme_io_md": false, 00:20:45.662 "write_zeroes": true, 00:20:45.662 
"zcopy": true, 00:20:45.662 "get_zone_info": false, 00:20:45.662 "zone_management": false, 00:20:45.662 "zone_append": false, 00:20:45.662 "compare": false, 00:20:45.662 "compare_and_write": false, 00:20:45.662 "abort": true, 00:20:45.662 "seek_hole": false, 00:20:45.662 "seek_data": false, 00:20:45.662 "copy": true, 00:20:45.662 "nvme_iov_md": false 00:20:45.662 }, 00:20:45.662 "memory_domains": [ 00:20:45.662 { 00:20:45.662 "dma_device_id": "system", 00:20:45.662 "dma_device_type": 1 00:20:45.662 }, 00:20:45.662 { 00:20:45.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.662 "dma_device_type": 2 00:20:45.662 } 00:20:45.662 ], 00:20:45.662 "driver_specific": {} 00:20:45.662 } 00:20:45.662 ] 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.662 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.662 [2024-12-05 12:52:28.159014] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:45.662 [2024-12-05 12:52:28.159062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:45.662 [2024-12-05 12:52:28.159084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:45.663 [2024-12-05 12:52:28.160890] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:45.663 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.663 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:45.663 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:45.663 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:45.663 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:45.663 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:45.663 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:45.663 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.663 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.663 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.663 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.663 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.663 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.663 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.663 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.663 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.663 12:52:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.663 "name": "Existed_Raid", 00:20:45.663 "uuid": "add32123-6661-48f0-b9dd-83243e9fa7fc", 00:20:45.663 "strip_size_kb": 64, 00:20:45.663 "state": "configuring", 00:20:45.663 "raid_level": "concat", 00:20:45.663 "superblock": true, 00:20:45.663 "num_base_bdevs": 3, 00:20:45.663 "num_base_bdevs_discovered": 2, 00:20:45.663 "num_base_bdevs_operational": 3, 00:20:45.663 "base_bdevs_list": [ 00:20:45.663 { 00:20:45.663 "name": "BaseBdev1", 00:20:45.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.663 "is_configured": false, 00:20:45.663 "data_offset": 0, 00:20:45.663 "data_size": 0 00:20:45.663 }, 00:20:45.663 { 00:20:45.663 "name": "BaseBdev2", 00:20:45.663 "uuid": "0628b237-eb64-4db8-86a3-aa2f232bb02b", 00:20:45.663 "is_configured": true, 00:20:45.663 "data_offset": 2048, 00:20:45.663 "data_size": 63488 00:20:45.663 }, 00:20:45.663 { 00:20:45.663 "name": "BaseBdev3", 00:20:45.663 "uuid": "35408d68-116d-48f0-a374-c10a502c1e92", 00:20:45.663 "is_configured": true, 00:20:45.663 "data_offset": 2048, 00:20:45.663 "data_size": 63488 00:20:45.663 } 00:20:45.663 ] 00:20:45.663 }' 00:20:45.663 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.663 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.232 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:46.232 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.232 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.232 [2024-12-05 12:52:28.511111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:46.232 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.232 12:52:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:46.232 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:46.232 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:46.232 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:46.232 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:46.232 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:46.232 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.232 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.232 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.232 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.232 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.232 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.232 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.232 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.232 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.232 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.232 "name": "Existed_Raid", 00:20:46.232 "uuid": "add32123-6661-48f0-b9dd-83243e9fa7fc", 00:20:46.232 "strip_size_kb": 64, 
00:20:46.232 "state": "configuring", 00:20:46.232 "raid_level": "concat", 00:20:46.232 "superblock": true, 00:20:46.232 "num_base_bdevs": 3, 00:20:46.232 "num_base_bdevs_discovered": 1, 00:20:46.232 "num_base_bdevs_operational": 3, 00:20:46.232 "base_bdevs_list": [ 00:20:46.232 { 00:20:46.232 "name": "BaseBdev1", 00:20:46.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.232 "is_configured": false, 00:20:46.232 "data_offset": 0, 00:20:46.232 "data_size": 0 00:20:46.232 }, 00:20:46.232 { 00:20:46.232 "name": null, 00:20:46.232 "uuid": "0628b237-eb64-4db8-86a3-aa2f232bb02b", 00:20:46.232 "is_configured": false, 00:20:46.232 "data_offset": 0, 00:20:46.232 "data_size": 63488 00:20:46.232 }, 00:20:46.232 { 00:20:46.232 "name": "BaseBdev3", 00:20:46.232 "uuid": "35408d68-116d-48f0-a374-c10a502c1e92", 00:20:46.232 "is_configured": true, 00:20:46.232 "data_offset": 2048, 00:20:46.232 "data_size": 63488 00:20:46.232 } 00:20:46.232 ] 00:20:46.232 }' 00:20:46.232 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.232 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.492 [2024-12-05 12:52:28.874300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:46.492 BaseBdev1 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.492 
[ 00:20:46.492 { 00:20:46.492 "name": "BaseBdev1", 00:20:46.492 "aliases": [ 00:20:46.492 "07b2bb38-ad7a-483e-8c15-fb80d9e41252" 00:20:46.492 ], 00:20:46.492 "product_name": "Malloc disk", 00:20:46.492 "block_size": 512, 00:20:46.492 "num_blocks": 65536, 00:20:46.492 "uuid": "07b2bb38-ad7a-483e-8c15-fb80d9e41252", 00:20:46.492 "assigned_rate_limits": { 00:20:46.492 "rw_ios_per_sec": 0, 00:20:46.492 "rw_mbytes_per_sec": 0, 00:20:46.492 "r_mbytes_per_sec": 0, 00:20:46.492 "w_mbytes_per_sec": 0 00:20:46.492 }, 00:20:46.492 "claimed": true, 00:20:46.492 "claim_type": "exclusive_write", 00:20:46.492 "zoned": false, 00:20:46.492 "supported_io_types": { 00:20:46.492 "read": true, 00:20:46.492 "write": true, 00:20:46.492 "unmap": true, 00:20:46.492 "flush": true, 00:20:46.492 "reset": true, 00:20:46.492 "nvme_admin": false, 00:20:46.492 "nvme_io": false, 00:20:46.492 "nvme_io_md": false, 00:20:46.492 "write_zeroes": true, 00:20:46.492 "zcopy": true, 00:20:46.492 "get_zone_info": false, 00:20:46.492 "zone_management": false, 00:20:46.492 "zone_append": false, 00:20:46.492 "compare": false, 00:20:46.492 "compare_and_write": false, 00:20:46.492 "abort": true, 00:20:46.492 "seek_hole": false, 00:20:46.492 "seek_data": false, 00:20:46.492 "copy": true, 00:20:46.492 "nvme_iov_md": false 00:20:46.492 }, 00:20:46.492 "memory_domains": [ 00:20:46.492 { 00:20:46.492 "dma_device_id": "system", 00:20:46.492 "dma_device_type": 1 00:20:46.492 }, 00:20:46.492 { 00:20:46.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.492 "dma_device_type": 2 00:20:46.492 } 00:20:46.492 ], 00:20:46.492 "driver_specific": {} 00:20:46.492 } 00:20:46.492 ] 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:46.492 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:20:46.493 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:46.493 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:46.493 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:46.493 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:46.493 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:46.493 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.493 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.493 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.493 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.493 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.493 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.493 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.493 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.493 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.493 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.493 "name": "Existed_Raid", 00:20:46.493 "uuid": "add32123-6661-48f0-b9dd-83243e9fa7fc", 00:20:46.493 "strip_size_kb": 64, 00:20:46.493 "state": "configuring", 00:20:46.493 "raid_level": "concat", 00:20:46.493 "superblock": true, 
00:20:46.493 "num_base_bdevs": 3, 00:20:46.493 "num_base_bdevs_discovered": 2, 00:20:46.493 "num_base_bdevs_operational": 3, 00:20:46.493 "base_bdevs_list": [ 00:20:46.493 { 00:20:46.493 "name": "BaseBdev1", 00:20:46.493 "uuid": "07b2bb38-ad7a-483e-8c15-fb80d9e41252", 00:20:46.493 "is_configured": true, 00:20:46.493 "data_offset": 2048, 00:20:46.493 "data_size": 63488 00:20:46.493 }, 00:20:46.493 { 00:20:46.493 "name": null, 00:20:46.493 "uuid": "0628b237-eb64-4db8-86a3-aa2f232bb02b", 00:20:46.493 "is_configured": false, 00:20:46.493 "data_offset": 0, 00:20:46.493 "data_size": 63488 00:20:46.493 }, 00:20:46.493 { 00:20:46.493 "name": "BaseBdev3", 00:20:46.493 "uuid": "35408d68-116d-48f0-a374-c10a502c1e92", 00:20:46.493 "is_configured": true, 00:20:46.493 "data_offset": 2048, 00:20:46.493 "data_size": 63488 00:20:46.493 } 00:20:46.493 ] 00:20:46.493 }' 00:20:46.493 12:52:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.493 12:52:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.752 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:46.752 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.752 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.752 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.752 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.752 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:46.752 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:46.752 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:20:46.752 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.752 [2024-12-05 12:52:29.254444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:46.752 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.752 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:46.752 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:46.752 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:46.752 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:46.752 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:46.752 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:46.752 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.752 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.752 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.752 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.753 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.753 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.753 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.753 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:20:46.753 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.753 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.753 "name": "Existed_Raid", 00:20:46.753 "uuid": "add32123-6661-48f0-b9dd-83243e9fa7fc", 00:20:46.753 "strip_size_kb": 64, 00:20:46.753 "state": "configuring", 00:20:46.753 "raid_level": "concat", 00:20:46.753 "superblock": true, 00:20:46.753 "num_base_bdevs": 3, 00:20:46.753 "num_base_bdevs_discovered": 1, 00:20:46.753 "num_base_bdevs_operational": 3, 00:20:46.753 "base_bdevs_list": [ 00:20:46.753 { 00:20:46.753 "name": "BaseBdev1", 00:20:46.753 "uuid": "07b2bb38-ad7a-483e-8c15-fb80d9e41252", 00:20:46.753 "is_configured": true, 00:20:46.753 "data_offset": 2048, 00:20:46.753 "data_size": 63488 00:20:46.753 }, 00:20:46.753 { 00:20:46.753 "name": null, 00:20:46.753 "uuid": "0628b237-eb64-4db8-86a3-aa2f232bb02b", 00:20:46.753 "is_configured": false, 00:20:46.753 "data_offset": 0, 00:20:46.753 "data_size": 63488 00:20:46.753 }, 00:20:46.753 { 00:20:46.753 "name": null, 00:20:46.753 "uuid": "35408d68-116d-48f0-a374-c10a502c1e92", 00:20:46.753 "is_configured": false, 00:20:46.753 "data_offset": 0, 00:20:46.753 "data_size": 63488 00:20:46.753 } 00:20:46.753 ] 00:20:46.753 }' 00:20:46.753 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.753 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.054 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.054 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.054 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.054 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:20:47.054 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.054 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:47.054 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:47.054 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.054 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.054 [2024-12-05 12:52:29.622572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:47.317 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.317 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:47.317 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:47.317 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:47.317 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:47.317 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:47.317 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:47.317 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.317 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.317 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.317 12:52:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.317 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.317 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.317 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.317 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.317 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.317 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.317 "name": "Existed_Raid", 00:20:47.317 "uuid": "add32123-6661-48f0-b9dd-83243e9fa7fc", 00:20:47.317 "strip_size_kb": 64, 00:20:47.317 "state": "configuring", 00:20:47.317 "raid_level": "concat", 00:20:47.317 "superblock": true, 00:20:47.317 "num_base_bdevs": 3, 00:20:47.317 "num_base_bdevs_discovered": 2, 00:20:47.317 "num_base_bdevs_operational": 3, 00:20:47.317 "base_bdevs_list": [ 00:20:47.317 { 00:20:47.317 "name": "BaseBdev1", 00:20:47.317 "uuid": "07b2bb38-ad7a-483e-8c15-fb80d9e41252", 00:20:47.317 "is_configured": true, 00:20:47.317 "data_offset": 2048, 00:20:47.317 "data_size": 63488 00:20:47.317 }, 00:20:47.318 { 00:20:47.318 "name": null, 00:20:47.318 "uuid": "0628b237-eb64-4db8-86a3-aa2f232bb02b", 00:20:47.318 "is_configured": false, 00:20:47.318 "data_offset": 0, 00:20:47.318 "data_size": 63488 00:20:47.318 }, 00:20:47.318 { 00:20:47.318 "name": "BaseBdev3", 00:20:47.318 "uuid": "35408d68-116d-48f0-a374-c10a502c1e92", 00:20:47.318 "is_configured": true, 00:20:47.318 "data_offset": 2048, 00:20:47.318 "data_size": 63488 00:20:47.318 } 00:20:47.318 ] 00:20:47.318 }' 00:20:47.318 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.318 
12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.578 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:47.578 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.578 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.578 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.578 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.578 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:47.578 12:52:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:47.578 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.578 12:52:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.578 [2024-12-05 12:52:29.978663] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:47.578 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.578 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:20:47.578 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:47.578 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:47.578 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:47.578 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:47.578 12:52:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:47.578 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.578 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.578 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.578 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.578 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.578 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.578 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.578 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.578 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.578 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.578 "name": "Existed_Raid", 00:20:47.578 "uuid": "add32123-6661-48f0-b9dd-83243e9fa7fc", 00:20:47.578 "strip_size_kb": 64, 00:20:47.578 "state": "configuring", 00:20:47.578 "raid_level": "concat", 00:20:47.578 "superblock": true, 00:20:47.578 "num_base_bdevs": 3, 00:20:47.578 "num_base_bdevs_discovered": 1, 00:20:47.578 "num_base_bdevs_operational": 3, 00:20:47.578 "base_bdevs_list": [ 00:20:47.578 { 00:20:47.578 "name": null, 00:20:47.578 "uuid": "07b2bb38-ad7a-483e-8c15-fb80d9e41252", 00:20:47.578 "is_configured": false, 00:20:47.578 "data_offset": 0, 00:20:47.578 "data_size": 63488 00:20:47.578 }, 00:20:47.578 { 00:20:47.578 "name": null, 00:20:47.578 "uuid": "0628b237-eb64-4db8-86a3-aa2f232bb02b", 00:20:47.578 "is_configured": false, 
00:20:47.578 "data_offset": 0, 00:20:47.578 "data_size": 63488 00:20:47.578 }, 00:20:47.578 { 00:20:47.578 "name": "BaseBdev3", 00:20:47.578 "uuid": "35408d68-116d-48f0-a374-c10a502c1e92", 00:20:47.578 "is_configured": true, 00:20:47.578 "data_offset": 2048, 00:20:47.578 "data_size": 63488 00:20:47.578 } 00:20:47.578 ] 00:20:47.578 }' 00:20:47.578 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.578 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.838 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.838 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:47.838 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.838 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.838 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.838 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:47.838 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:47.838 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.838 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.838 [2024-12-05 12:52:30.374359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:47.838 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.838 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:20:47.838 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:47.838 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:47.838 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:47.838 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:47.839 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:47.839 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.839 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.839 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.839 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.839 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.839 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.839 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.839 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.839 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.839 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.839 "name": "Existed_Raid", 00:20:47.839 "uuid": "add32123-6661-48f0-b9dd-83243e9fa7fc", 00:20:47.839 "strip_size_kb": 64, 00:20:47.839 "state": "configuring", 00:20:47.839 "raid_level": "concat", 00:20:47.839 "superblock": true, 00:20:47.839 
"num_base_bdevs": 3, 00:20:47.839 "num_base_bdevs_discovered": 2, 00:20:47.839 "num_base_bdevs_operational": 3, 00:20:47.839 "base_bdevs_list": [ 00:20:47.839 { 00:20:47.839 "name": null, 00:20:47.839 "uuid": "07b2bb38-ad7a-483e-8c15-fb80d9e41252", 00:20:47.839 "is_configured": false, 00:20:47.839 "data_offset": 0, 00:20:47.839 "data_size": 63488 00:20:47.839 }, 00:20:47.839 { 00:20:47.839 "name": "BaseBdev2", 00:20:47.839 "uuid": "0628b237-eb64-4db8-86a3-aa2f232bb02b", 00:20:47.839 "is_configured": true, 00:20:47.839 "data_offset": 2048, 00:20:47.839 "data_size": 63488 00:20:47.839 }, 00:20:47.839 { 00:20:47.839 "name": "BaseBdev3", 00:20:47.839 "uuid": "35408d68-116d-48f0-a374-c10a502c1e92", 00:20:47.839 "is_configured": true, 00:20:47.839 "data_offset": 2048, 00:20:47.839 "data_size": 63488 00:20:47.839 } 00:20:47.839 ] 00:20:47.839 }' 00:20:47.839 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.839 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 07b2bb38-ad7a-483e-8c15-fb80d9e41252 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.412 [2024-12-05 12:52:30.781054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:48.412 [2024-12-05 12:52:30.781210] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:48.412 [2024-12-05 12:52:30.781223] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:48.412 [2024-12-05 12:52:30.781413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:48.412 NewBaseBdev 00:20:48.412 [2024-12-05 12:52:30.781527] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:48.412 [2024-12-05 12:52:30.781534] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:48.412 [2024-12-05 12:52:30.781631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=NewBaseBdev 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.412 [ 00:20:48.412 { 00:20:48.412 "name": "NewBaseBdev", 00:20:48.412 "aliases": [ 00:20:48.412 "07b2bb38-ad7a-483e-8c15-fb80d9e41252" 00:20:48.412 ], 00:20:48.412 "product_name": "Malloc disk", 00:20:48.412 "block_size": 512, 00:20:48.412 "num_blocks": 65536, 00:20:48.412 "uuid": "07b2bb38-ad7a-483e-8c15-fb80d9e41252", 00:20:48.412 "assigned_rate_limits": { 00:20:48.412 "rw_ios_per_sec": 0, 00:20:48.412 "rw_mbytes_per_sec": 0, 00:20:48.412 "r_mbytes_per_sec": 0, 00:20:48.412 "w_mbytes_per_sec": 0 00:20:48.412 }, 00:20:48.412 "claimed": true, 00:20:48.412 "claim_type": "exclusive_write", 00:20:48.412 "zoned": false, 00:20:48.412 "supported_io_types": { 00:20:48.412 "read": true, 00:20:48.412 
"write": true, 00:20:48.412 "unmap": true, 00:20:48.412 "flush": true, 00:20:48.412 "reset": true, 00:20:48.412 "nvme_admin": false, 00:20:48.412 "nvme_io": false, 00:20:48.412 "nvme_io_md": false, 00:20:48.412 "write_zeroes": true, 00:20:48.412 "zcopy": true, 00:20:48.412 "get_zone_info": false, 00:20:48.412 "zone_management": false, 00:20:48.412 "zone_append": false, 00:20:48.412 "compare": false, 00:20:48.412 "compare_and_write": false, 00:20:48.412 "abort": true, 00:20:48.412 "seek_hole": false, 00:20:48.412 "seek_data": false, 00:20:48.412 "copy": true, 00:20:48.412 "nvme_iov_md": false 00:20:48.412 }, 00:20:48.412 "memory_domains": [ 00:20:48.412 { 00:20:48.412 "dma_device_id": "system", 00:20:48.412 "dma_device_type": 1 00:20:48.412 }, 00:20:48.412 { 00:20:48.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.412 "dma_device_type": 2 00:20:48.412 } 00:20:48.412 ], 00:20:48.412 "driver_specific": {} 00:20:48.412 } 00:20:48.412 ] 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:48.412 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:20:48.413 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:48.413 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:48.413 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:48.413 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:48.413 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:48.413 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:20:48.413 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.413 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.413 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.413 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.413 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.413 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.413 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.413 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.413 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.413 "name": "Existed_Raid", 00:20:48.413 "uuid": "add32123-6661-48f0-b9dd-83243e9fa7fc", 00:20:48.413 "strip_size_kb": 64, 00:20:48.413 "state": "online", 00:20:48.413 "raid_level": "concat", 00:20:48.413 "superblock": true, 00:20:48.413 "num_base_bdevs": 3, 00:20:48.413 "num_base_bdevs_discovered": 3, 00:20:48.413 "num_base_bdevs_operational": 3, 00:20:48.413 "base_bdevs_list": [ 00:20:48.413 { 00:20:48.413 "name": "NewBaseBdev", 00:20:48.413 "uuid": "07b2bb38-ad7a-483e-8c15-fb80d9e41252", 00:20:48.413 "is_configured": true, 00:20:48.413 "data_offset": 2048, 00:20:48.413 "data_size": 63488 00:20:48.413 }, 00:20:48.413 { 00:20:48.413 "name": "BaseBdev2", 00:20:48.413 "uuid": "0628b237-eb64-4db8-86a3-aa2f232bb02b", 00:20:48.413 "is_configured": true, 00:20:48.413 "data_offset": 2048, 00:20:48.413 "data_size": 63488 00:20:48.413 }, 00:20:48.413 { 00:20:48.413 "name": "BaseBdev3", 00:20:48.413 "uuid": 
"35408d68-116d-48f0-a374-c10a502c1e92", 00:20:48.413 "is_configured": true, 00:20:48.413 "data_offset": 2048, 00:20:48.413 "data_size": 63488 00:20:48.413 } 00:20:48.413 ] 00:20:48.413 }' 00:20:48.413 12:52:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.413 12:52:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.673 [2024-12-05 12:52:31.113421] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:48.673 "name": "Existed_Raid", 00:20:48.673 "aliases": [ 00:20:48.673 "add32123-6661-48f0-b9dd-83243e9fa7fc" 
00:20:48.673 ], 00:20:48.673 "product_name": "Raid Volume", 00:20:48.673 "block_size": 512, 00:20:48.673 "num_blocks": 190464, 00:20:48.673 "uuid": "add32123-6661-48f0-b9dd-83243e9fa7fc", 00:20:48.673 "assigned_rate_limits": { 00:20:48.673 "rw_ios_per_sec": 0, 00:20:48.673 "rw_mbytes_per_sec": 0, 00:20:48.673 "r_mbytes_per_sec": 0, 00:20:48.673 "w_mbytes_per_sec": 0 00:20:48.673 }, 00:20:48.673 "claimed": false, 00:20:48.673 "zoned": false, 00:20:48.673 "supported_io_types": { 00:20:48.673 "read": true, 00:20:48.673 "write": true, 00:20:48.673 "unmap": true, 00:20:48.673 "flush": true, 00:20:48.673 "reset": true, 00:20:48.673 "nvme_admin": false, 00:20:48.673 "nvme_io": false, 00:20:48.673 "nvme_io_md": false, 00:20:48.673 "write_zeroes": true, 00:20:48.673 "zcopy": false, 00:20:48.673 "get_zone_info": false, 00:20:48.673 "zone_management": false, 00:20:48.673 "zone_append": false, 00:20:48.673 "compare": false, 00:20:48.673 "compare_and_write": false, 00:20:48.673 "abort": false, 00:20:48.673 "seek_hole": false, 00:20:48.673 "seek_data": false, 00:20:48.673 "copy": false, 00:20:48.673 "nvme_iov_md": false 00:20:48.673 }, 00:20:48.673 "memory_domains": [ 00:20:48.673 { 00:20:48.673 "dma_device_id": "system", 00:20:48.673 "dma_device_type": 1 00:20:48.673 }, 00:20:48.673 { 00:20:48.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.673 "dma_device_type": 2 00:20:48.673 }, 00:20:48.673 { 00:20:48.673 "dma_device_id": "system", 00:20:48.673 "dma_device_type": 1 00:20:48.673 }, 00:20:48.673 { 00:20:48.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.673 "dma_device_type": 2 00:20:48.673 }, 00:20:48.673 { 00:20:48.673 "dma_device_id": "system", 00:20:48.673 "dma_device_type": 1 00:20:48.673 }, 00:20:48.673 { 00:20:48.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.673 "dma_device_type": 2 00:20:48.673 } 00:20:48.673 ], 00:20:48.673 "driver_specific": { 00:20:48.673 "raid": { 00:20:48.673 "uuid": "add32123-6661-48f0-b9dd-83243e9fa7fc", 00:20:48.673 
"strip_size_kb": 64, 00:20:48.673 "state": "online", 00:20:48.673 "raid_level": "concat", 00:20:48.673 "superblock": true, 00:20:48.673 "num_base_bdevs": 3, 00:20:48.673 "num_base_bdevs_discovered": 3, 00:20:48.673 "num_base_bdevs_operational": 3, 00:20:48.673 "base_bdevs_list": [ 00:20:48.673 { 00:20:48.673 "name": "NewBaseBdev", 00:20:48.673 "uuid": "07b2bb38-ad7a-483e-8c15-fb80d9e41252", 00:20:48.673 "is_configured": true, 00:20:48.673 "data_offset": 2048, 00:20:48.673 "data_size": 63488 00:20:48.673 }, 00:20:48.673 { 00:20:48.673 "name": "BaseBdev2", 00:20:48.673 "uuid": "0628b237-eb64-4db8-86a3-aa2f232bb02b", 00:20:48.673 "is_configured": true, 00:20:48.673 "data_offset": 2048, 00:20:48.673 "data_size": 63488 00:20:48.673 }, 00:20:48.673 { 00:20:48.673 "name": "BaseBdev3", 00:20:48.673 "uuid": "35408d68-116d-48f0-a374-c10a502c1e92", 00:20:48.673 "is_configured": true, 00:20:48.673 "data_offset": 2048, 00:20:48.673 "data_size": 63488 00:20:48.673 } 00:20:48.673 ] 00:20:48.673 } 00:20:48.673 } 00:20:48.673 }' 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:48.673 BaseBdev2 00:20:48.673 BaseBdev3' 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:48.673 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.933 12:52:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.933 [2024-12-05 12:52:31.305200] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:48.933 [2024-12-05 12:52:31.305224] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:48.933 [2024-12-05 12:52:31.305286] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:48.933 [2024-12-05 12:52:31.305336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:48.933 [2024-12-05 12:52:31.305346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64541 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64541 ']' 00:20:48.933 12:52:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64541 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64541 00:20:48.933 killing process with pid 64541 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64541' 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64541 00:20:48.933 [2024-12-05 12:52:31.336484] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:48.933 12:52:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64541 00:20:48.933 [2024-12-05 12:52:31.488551] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:49.503 ************************************ 00:20:49.503 END TEST raid_state_function_test_sb 00:20:49.503 ************************************ 00:20:49.503 12:52:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:20:49.503 00:20:49.503 real 0m7.538s 00:20:49.503 user 0m12.141s 00:20:49.503 sys 0m1.180s 00:20:49.503 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:49.503 12:52:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.762 12:52:32 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:20:49.762 12:52:32 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:49.762 12:52:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:49.762 12:52:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:49.762 ************************************ 00:20:49.762 START TEST raid_superblock_test 00:20:49.762 ************************************ 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:20:49.762 12:52:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65128 00:20:49.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65128 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65128 ']' 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.762 12:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.762 [2024-12-05 12:52:32.180431] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:20:49.762 [2024-12-05 12:52:32.180564] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65128 ] 00:20:49.762 [2024-12-05 12:52:32.326662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.021 [2024-12-05 12:52:32.412022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.021 [2024-12-05 12:52:32.523942] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:50.021 [2024-12-05 12:52:32.523997] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:50.589 12:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.589 12:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:20:50.589 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:50.589 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:50.589 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:50.589 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:50.589 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:50.589 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:50.589 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:50.589 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:50.589 12:52:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:20:50.589 
12:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.589 12:52:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.589 malloc1 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.589 [2024-12-05 12:52:33.028030] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:50.589 [2024-12-05 12:52:33.028081] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.589 [2024-12-05 12:52:33.028098] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:50.589 [2024-12-05 12:52:33.028105] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.589 [2024-12-05 12:52:33.029868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.589 [2024-12-05 12:52:33.029897] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:50.589 pt1 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.589 malloc2 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.589 [2024-12-05 12:52:33.059503] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:50.589 [2024-12-05 12:52:33.059556] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.589 [2024-12-05 12:52:33.059578] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:50.589 [2024-12-05 12:52:33.059587] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.589 [2024-12-05 12:52:33.061329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.589 [2024-12-05 12:52:33.061358] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:50.589 
pt2 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.589 malloc3 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.589 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.589 [2024-12-05 12:52:33.109372] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:50.589 [2024-12-05 12:52:33.109562] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.589 [2024-12-05 12:52:33.109587] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:50.589 [2024-12-05 12:52:33.109595] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.589 [2024-12-05 12:52:33.111306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.590 [2024-12-05 12:52:33.111335] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:50.590 pt3 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.590 [2024-12-05 12:52:33.117407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:50.590 [2024-12-05 12:52:33.118970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:50.590 [2024-12-05 12:52:33.119022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:50.590 [2024-12-05 12:52:33.119146] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:50.590 [2024-12-05 12:52:33.119156] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:50.590 [2024-12-05 12:52:33.119355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:20:50.590 [2024-12-05 12:52:33.119465] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:50.590 [2024-12-05 12:52:33.119472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:50.590 [2024-12-05 12:52:33.119606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.590 12:52:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:50.590 "name": "raid_bdev1", 00:20:50.590 "uuid": "83fa0e24-3b40-432d-85ed-bef70b2d9d7f", 00:20:50.590 "strip_size_kb": 64, 00:20:50.590 "state": "online", 00:20:50.590 "raid_level": "concat", 00:20:50.590 "superblock": true, 00:20:50.590 "num_base_bdevs": 3, 00:20:50.590 "num_base_bdevs_discovered": 3, 00:20:50.590 "num_base_bdevs_operational": 3, 00:20:50.590 "base_bdevs_list": [ 00:20:50.590 { 00:20:50.590 "name": "pt1", 00:20:50.590 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:50.590 "is_configured": true, 00:20:50.590 "data_offset": 2048, 00:20:50.590 "data_size": 63488 00:20:50.590 }, 00:20:50.590 { 00:20:50.590 "name": "pt2", 00:20:50.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:50.590 "is_configured": true, 00:20:50.590 "data_offset": 2048, 00:20:50.590 "data_size": 63488 00:20:50.590 }, 00:20:50.590 { 00:20:50.590 "name": "pt3", 00:20:50.590 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:50.590 "is_configured": true, 00:20:50.590 "data_offset": 2048, 00:20:50.590 "data_size": 63488 00:20:50.590 } 00:20:50.590 ] 00:20:50.590 }' 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:50.590 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.849 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:50.849 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:50.849 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:50.849 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:20:50.849 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:50.849 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:50.849 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:50.849 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.127 [2024-12-05 12:52:33.437855] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:51.127 "name": "raid_bdev1", 00:20:51.127 "aliases": [ 00:20:51.127 "83fa0e24-3b40-432d-85ed-bef70b2d9d7f" 00:20:51.127 ], 00:20:51.127 "product_name": "Raid Volume", 00:20:51.127 "block_size": 512, 00:20:51.127 "num_blocks": 190464, 00:20:51.127 "uuid": "83fa0e24-3b40-432d-85ed-bef70b2d9d7f", 00:20:51.127 "assigned_rate_limits": { 00:20:51.127 "rw_ios_per_sec": 0, 00:20:51.127 "rw_mbytes_per_sec": 0, 00:20:51.127 "r_mbytes_per_sec": 0, 00:20:51.127 "w_mbytes_per_sec": 0 00:20:51.127 }, 00:20:51.127 "claimed": false, 00:20:51.127 "zoned": false, 00:20:51.127 "supported_io_types": { 00:20:51.127 "read": true, 00:20:51.127 "write": true, 00:20:51.127 "unmap": true, 00:20:51.127 "flush": true, 00:20:51.127 "reset": true, 00:20:51.127 "nvme_admin": false, 00:20:51.127 "nvme_io": false, 00:20:51.127 "nvme_io_md": false, 00:20:51.127 "write_zeroes": true, 00:20:51.127 "zcopy": false, 00:20:51.127 "get_zone_info": false, 00:20:51.127 "zone_management": false, 00:20:51.127 "zone_append": false, 00:20:51.127 "compare": 
false, 00:20:51.127 "compare_and_write": false, 00:20:51.127 "abort": false, 00:20:51.127 "seek_hole": false, 00:20:51.127 "seek_data": false, 00:20:51.127 "copy": false, 00:20:51.127 "nvme_iov_md": false 00:20:51.127 }, 00:20:51.127 "memory_domains": [ 00:20:51.127 { 00:20:51.127 "dma_device_id": "system", 00:20:51.127 "dma_device_type": 1 00:20:51.127 }, 00:20:51.127 { 00:20:51.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.127 "dma_device_type": 2 00:20:51.127 }, 00:20:51.127 { 00:20:51.127 "dma_device_id": "system", 00:20:51.127 "dma_device_type": 1 00:20:51.127 }, 00:20:51.127 { 00:20:51.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.127 "dma_device_type": 2 00:20:51.127 }, 00:20:51.127 { 00:20:51.127 "dma_device_id": "system", 00:20:51.127 "dma_device_type": 1 00:20:51.127 }, 00:20:51.127 { 00:20:51.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.127 "dma_device_type": 2 00:20:51.127 } 00:20:51.127 ], 00:20:51.127 "driver_specific": { 00:20:51.127 "raid": { 00:20:51.127 "uuid": "83fa0e24-3b40-432d-85ed-bef70b2d9d7f", 00:20:51.127 "strip_size_kb": 64, 00:20:51.127 "state": "online", 00:20:51.127 "raid_level": "concat", 00:20:51.127 "superblock": true, 00:20:51.127 "num_base_bdevs": 3, 00:20:51.127 "num_base_bdevs_discovered": 3, 00:20:51.127 "num_base_bdevs_operational": 3, 00:20:51.127 "base_bdevs_list": [ 00:20:51.127 { 00:20:51.127 "name": "pt1", 00:20:51.127 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:51.127 "is_configured": true, 00:20:51.127 "data_offset": 2048, 00:20:51.127 "data_size": 63488 00:20:51.127 }, 00:20:51.127 { 00:20:51.127 "name": "pt2", 00:20:51.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:51.127 "is_configured": true, 00:20:51.127 "data_offset": 2048, 00:20:51.127 "data_size": 63488 00:20:51.127 }, 00:20:51.127 { 00:20:51.127 "name": "pt3", 00:20:51.127 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:51.127 "is_configured": true, 00:20:51.127 "data_offset": 2048, 00:20:51.127 
"data_size": 63488 00:20:51.127 } 00:20:51.127 ] 00:20:51.127 } 00:20:51.127 } 00:20:51.127 }' 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:51.127 pt2 00:20:51.127 pt3' 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:51.127 12:52:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.127 [2024-12-05 12:52:33.621935] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:51.127 12:52:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=83fa0e24-3b40-432d-85ed-bef70b2d9d7f 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 83fa0e24-3b40-432d-85ed-bef70b2d9d7f ']' 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.127 [2024-12-05 12:52:33.653515] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:51.127 [2024-12-05 12:52:33.653540] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:51.127 [2024-12-05 12:52:33.653608] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:51.127 [2024-12-05 12:52:33.653674] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:51.127 [2024-12-05 12:52:33.653684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.127 12:52:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.127 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.386 [2024-12-05 12:52:33.757693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:51.386 [2024-12-05 12:52:33.760326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:20:51.386 [2024-12-05 12:52:33.760475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:51.386 [2024-12-05 12:52:33.760561] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:51.386 [2024-12-05 12:52:33.760714] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:51.386 [2024-12-05 12:52:33.760875] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:51.386 [2024-12-05 12:52:33.760951] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:51.386 [2024-12-05 12:52:33.760965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:51.386 request: 00:20:51.386 { 00:20:51.386 "name": "raid_bdev1", 00:20:51.386 "raid_level": "concat", 00:20:51.386 "base_bdevs": [ 00:20:51.386 "malloc1", 00:20:51.386 "malloc2", 00:20:51.386 "malloc3" 00:20:51.386 ], 00:20:51.386 "strip_size_kb": 64, 00:20:51.386 "superblock": false, 00:20:51.386 "method": "bdev_raid_create", 00:20:51.386 "req_id": 1 00:20:51.386 } 00:20:51.386 Got JSON-RPC error response 00:20:51.386 response: 00:20:51.386 { 00:20:51.386 "code": -17, 00:20:51.386 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:51.386 } 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es 
== 0 )) 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.386 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.386 [2024-12-05 12:52:33.793665] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:51.386 [2024-12-05 12:52:33.793713] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.386 [2024-12-05 12:52:33.793733] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:51.387 [2024-12-05 12:52:33.793744] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.387 [2024-12-05 12:52:33.796338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.387 [2024-12-05 12:52:33.796378] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:51.387 [2024-12-05 12:52:33.796459] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:51.387 [2024-12-05 12:52:33.796527] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:51.387 pt1 00:20:51.387 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.387 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:20:51.387 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:51.387 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:51.387 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:51.387 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:51.387 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:51.387 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:51.387 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:51.387 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:51.387 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:51.387 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.387 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.387 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.387 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.387 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.387 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:51.387 "name": "raid_bdev1", 
00:20:51.387 "uuid": "83fa0e24-3b40-432d-85ed-bef70b2d9d7f", 00:20:51.387 "strip_size_kb": 64, 00:20:51.387 "state": "configuring", 00:20:51.387 "raid_level": "concat", 00:20:51.387 "superblock": true, 00:20:51.387 "num_base_bdevs": 3, 00:20:51.387 "num_base_bdevs_discovered": 1, 00:20:51.387 "num_base_bdevs_operational": 3, 00:20:51.387 "base_bdevs_list": [ 00:20:51.387 { 00:20:51.387 "name": "pt1", 00:20:51.387 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:51.387 "is_configured": true, 00:20:51.387 "data_offset": 2048, 00:20:51.387 "data_size": 63488 00:20:51.387 }, 00:20:51.387 { 00:20:51.387 "name": null, 00:20:51.387 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:51.387 "is_configured": false, 00:20:51.387 "data_offset": 2048, 00:20:51.387 "data_size": 63488 00:20:51.387 }, 00:20:51.387 { 00:20:51.387 "name": null, 00:20:51.387 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:51.387 "is_configured": false, 00:20:51.387 "data_offset": 2048, 00:20:51.387 "data_size": 63488 00:20:51.387 } 00:20:51.387 ] 00:20:51.387 }' 00:20:51.387 12:52:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:51.387 12:52:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.645 [2024-12-05 12:52:34.093751] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:51.645 [2024-12-05 12:52:34.093816] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.645 [2024-12-05 12:52:34.093839] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:51.645 [2024-12-05 12:52:34.093848] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.645 [2024-12-05 12:52:34.094241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.645 [2024-12-05 12:52:34.094260] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:51.645 [2024-12-05 12:52:34.094352] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:51.645 [2024-12-05 12:52:34.094380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:51.645 pt2 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.645 [2024-12-05 12:52:34.101744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:51.645 "name": "raid_bdev1", 00:20:51.645 "uuid": "83fa0e24-3b40-432d-85ed-bef70b2d9d7f", 00:20:51.645 "strip_size_kb": 64, 00:20:51.645 "state": "configuring", 00:20:51.645 "raid_level": "concat", 00:20:51.645 "superblock": true, 00:20:51.645 "num_base_bdevs": 3, 00:20:51.645 "num_base_bdevs_discovered": 1, 00:20:51.645 "num_base_bdevs_operational": 3, 00:20:51.645 "base_bdevs_list": [ 00:20:51.645 { 00:20:51.645 "name": "pt1", 00:20:51.645 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:51.645 "is_configured": true, 00:20:51.645 "data_offset": 2048, 00:20:51.645 "data_size": 63488 00:20:51.645 }, 00:20:51.645 { 00:20:51.645 "name": null, 00:20:51.645 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:51.645 "is_configured": false, 00:20:51.645 "data_offset": 0, 00:20:51.645 "data_size": 63488 00:20:51.645 }, 00:20:51.645 { 00:20:51.645 "name": null, 00:20:51.645 
"uuid": "00000000-0000-0000-0000-000000000003", 00:20:51.645 "is_configured": false, 00:20:51.645 "data_offset": 2048, 00:20:51.645 "data_size": 63488 00:20:51.645 } 00:20:51.645 ] 00:20:51.645 }' 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:51.645 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.906 [2024-12-05 12:52:34.393808] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:51.906 [2024-12-05 12:52:34.393866] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.906 [2024-12-05 12:52:34.393882] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:51.906 [2024-12-05 12:52:34.393892] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.906 [2024-12-05 12:52:34.394320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.906 [2024-12-05 12:52:34.394348] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:51.906 [2024-12-05 12:52:34.394414] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:51.906 [2024-12-05 12:52:34.394435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:51.906 pt2 00:20:51.906 12:52:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.906 [2024-12-05 12:52:34.401796] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:51.906 [2024-12-05 12:52:34.401836] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.906 [2024-12-05 12:52:34.401850] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:51.906 [2024-12-05 12:52:34.401860] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.906 [2024-12-05 12:52:34.402208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.906 [2024-12-05 12:52:34.402237] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:51.906 [2024-12-05 12:52:34.402289] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:51.906 [2024-12-05 12:52:34.402307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:51.906 [2024-12-05 12:52:34.402416] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:51.906 [2024-12-05 12:52:34.402431] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:51.906 [2024-12-05 12:52:34.402675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:20:51.906 [2024-12-05 12:52:34.402811] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:51.906 [2024-12-05 12:52:34.402819] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:51.906 [2024-12-05 12:52:34.402940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:51.906 pt3 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.906 12:52:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:51.906 "name": "raid_bdev1", 00:20:51.906 "uuid": "83fa0e24-3b40-432d-85ed-bef70b2d9d7f", 00:20:51.906 "strip_size_kb": 64, 00:20:51.906 "state": "online", 00:20:51.906 "raid_level": "concat", 00:20:51.906 "superblock": true, 00:20:51.906 "num_base_bdevs": 3, 00:20:51.906 "num_base_bdevs_discovered": 3, 00:20:51.906 "num_base_bdevs_operational": 3, 00:20:51.906 "base_bdevs_list": [ 00:20:51.906 { 00:20:51.906 "name": "pt1", 00:20:51.906 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:51.906 "is_configured": true, 00:20:51.906 "data_offset": 2048, 00:20:51.906 "data_size": 63488 00:20:51.906 }, 00:20:51.906 { 00:20:51.906 "name": "pt2", 00:20:51.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:51.906 "is_configured": true, 00:20:51.906 "data_offset": 2048, 00:20:51.906 "data_size": 63488 00:20:51.906 }, 00:20:51.906 { 00:20:51.906 "name": "pt3", 00:20:51.906 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:51.906 "is_configured": true, 00:20:51.906 "data_offset": 2048, 00:20:51.906 "data_size": 63488 00:20:51.906 } 00:20:51.906 ] 00:20:51.906 }' 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:51.906 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.166 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:52.166 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:20:52.166 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:52.166 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:52.166 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:52.166 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:52.166 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:52.166 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:52.166 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.166 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.166 [2024-12-05 12:52:34.730204] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:52.166 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.427 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:52.427 "name": "raid_bdev1", 00:20:52.427 "aliases": [ 00:20:52.427 "83fa0e24-3b40-432d-85ed-bef70b2d9d7f" 00:20:52.427 ], 00:20:52.427 "product_name": "Raid Volume", 00:20:52.427 "block_size": 512, 00:20:52.427 "num_blocks": 190464, 00:20:52.427 "uuid": "83fa0e24-3b40-432d-85ed-bef70b2d9d7f", 00:20:52.427 "assigned_rate_limits": { 00:20:52.427 "rw_ios_per_sec": 0, 00:20:52.427 "rw_mbytes_per_sec": 0, 00:20:52.427 "r_mbytes_per_sec": 0, 00:20:52.427 "w_mbytes_per_sec": 0 00:20:52.427 }, 00:20:52.427 "claimed": false, 00:20:52.427 "zoned": false, 00:20:52.427 "supported_io_types": { 00:20:52.427 "read": true, 00:20:52.427 "write": true, 00:20:52.427 "unmap": true, 00:20:52.427 "flush": true, 00:20:52.427 "reset": true, 00:20:52.427 "nvme_admin": false, 00:20:52.427 "nvme_io": false, 00:20:52.427 
"nvme_io_md": false, 00:20:52.427 "write_zeroes": true, 00:20:52.427 "zcopy": false, 00:20:52.427 "get_zone_info": false, 00:20:52.427 "zone_management": false, 00:20:52.427 "zone_append": false, 00:20:52.427 "compare": false, 00:20:52.427 "compare_and_write": false, 00:20:52.427 "abort": false, 00:20:52.427 "seek_hole": false, 00:20:52.427 "seek_data": false, 00:20:52.427 "copy": false, 00:20:52.427 "nvme_iov_md": false 00:20:52.427 }, 00:20:52.427 "memory_domains": [ 00:20:52.427 { 00:20:52.427 "dma_device_id": "system", 00:20:52.427 "dma_device_type": 1 00:20:52.427 }, 00:20:52.427 { 00:20:52.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:52.427 "dma_device_type": 2 00:20:52.427 }, 00:20:52.427 { 00:20:52.427 "dma_device_id": "system", 00:20:52.427 "dma_device_type": 1 00:20:52.427 }, 00:20:52.427 { 00:20:52.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:52.427 "dma_device_type": 2 00:20:52.427 }, 00:20:52.427 { 00:20:52.427 "dma_device_id": "system", 00:20:52.427 "dma_device_type": 1 00:20:52.427 }, 00:20:52.427 { 00:20:52.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:52.427 "dma_device_type": 2 00:20:52.427 } 00:20:52.427 ], 00:20:52.427 "driver_specific": { 00:20:52.427 "raid": { 00:20:52.427 "uuid": "83fa0e24-3b40-432d-85ed-bef70b2d9d7f", 00:20:52.427 "strip_size_kb": 64, 00:20:52.427 "state": "online", 00:20:52.427 "raid_level": "concat", 00:20:52.427 "superblock": true, 00:20:52.427 "num_base_bdevs": 3, 00:20:52.427 "num_base_bdevs_discovered": 3, 00:20:52.427 "num_base_bdevs_operational": 3, 00:20:52.427 "base_bdevs_list": [ 00:20:52.427 { 00:20:52.427 "name": "pt1", 00:20:52.427 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:52.427 "is_configured": true, 00:20:52.427 "data_offset": 2048, 00:20:52.427 "data_size": 63488 00:20:52.427 }, 00:20:52.427 { 00:20:52.427 "name": "pt2", 00:20:52.427 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:52.427 "is_configured": true, 00:20:52.427 "data_offset": 2048, 00:20:52.427 "data_size": 
63488 00:20:52.427 }, 00:20:52.427 { 00:20:52.427 "name": "pt3", 00:20:52.427 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:52.427 "is_configured": true, 00:20:52.427 "data_offset": 2048, 00:20:52.427 "data_size": 63488 00:20:52.427 } 00:20:52.427 ] 00:20:52.427 } 00:20:52.427 } 00:20:52.427 }' 00:20:52.427 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:52.427 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:52.427 pt2 00:20:52.427 pt3' 00:20:52.427 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:52.427 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:52.427 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:52.427 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:52.427 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.427 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.427 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:52.427 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.427 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:52.427 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:52.427 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:52.427 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:20:52.427 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.427 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.427 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:52.427 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.427 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:52.427 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] 
| .uuid' 00:20:52.428 [2024-12-05 12:52:34.906214] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 83fa0e24-3b40-432d-85ed-bef70b2d9d7f '!=' 83fa0e24-3b40-432d-85ed-bef70b2d9d7f ']' 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65128 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65128 ']' 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65128 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65128 00:20:52.428 killing process with pid 65128 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65128' 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65128 00:20:52.428 12:52:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65128 00:20:52.428 [2024-12-05 12:52:34.954087] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:52.428 [2024-12-05 12:52:34.954167] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:52.428 [2024-12-05 12:52:34.954224] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:52.428 [2024-12-05 12:52:34.954240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:52.686 [2024-12-05 12:52:35.139799] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:53.625 ************************************ 00:20:53.625 END TEST raid_superblock_test 00:20:53.625 ************************************ 00:20:53.625 12:52:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:53.625 00:20:53.625 real 0m3.727s 00:20:53.625 user 0m5.349s 00:20:53.625 sys 0m0.518s 00:20:53.625 12:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:53.625 12:52:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.625 12:52:35 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:20:53.625 12:52:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:53.625 12:52:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:53.625 12:52:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:53.625 ************************************ 00:20:53.625 START TEST raid_read_error_test 00:20:53.625 ************************************ 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:20:53.625 12:52:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:20:53.625 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sgicJjDkuV 00:20:53.626 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65370 00:20:53.626 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65370 00:20:53.626 12:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65370 ']' 00:20:53.626 12:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.626 12:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.626 12:52:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:53.626 12:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.626 12:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.626 12:52:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.626 [2024-12-05 12:52:35.940775] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:20:53.626 [2024-12-05 12:52:35.940889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65370 ] 00:20:53.626 [2024-12-05 12:52:36.098517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.626 [2024-12-05 12:52:36.200451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.886 [2024-12-05 12:52:36.338648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:53.886 [2024-12-05 12:52:36.338702] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.455 BaseBdev1_malloc 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.455 true 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.455 [2024-12-05 12:52:36.819080] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:54.455 [2024-12-05 12:52:36.819278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.455 [2024-12-05 12:52:36.819304] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:54.455 [2024-12-05 12:52:36.819314] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.455 [2024-12-05 12:52:36.821454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.455 [2024-12-05 12:52:36.821507] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:54.455 BaseBdev1 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.455 BaseBdev2_malloc 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.455 true 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.455 [2024-12-05 12:52:36.863095] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:54.455 [2024-12-05 12:52:36.863138] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.455 [2024-12-05 12:52:36.863153] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:54.455 [2024-12-05 12:52:36.863163] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.455 [2024-12-05 12:52:36.865248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.455 [2024-12-05 12:52:36.865283] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:54.455 BaseBdev2 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.455 BaseBdev3_malloc 00:20:54.455 12:52:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.455 true 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.455 [2024-12-05 12:52:36.925978] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:54.455 [2024-12-05 12:52:36.926024] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.455 [2024-12-05 12:52:36.926041] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:54.455 [2024-12-05 12:52:36.926051] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.455 [2024-12-05 12:52:36.928143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.455 [2024-12-05 12:52:36.928179] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:54.455 BaseBdev3 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.455 [2024-12-05 12:52:36.934054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:54.455 [2024-12-05 12:52:36.935951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:54.455 [2024-12-05 12:52:36.936027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:54.455 [2024-12-05 12:52:36.936222] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:54.455 [2024-12-05 12:52:36.936233] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:54.455 [2024-12-05 12:52:36.936474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:20:54.455 [2024-12-05 12:52:36.936638] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:54.455 [2024-12-05 12:52:36.936739] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:54.455 [2024-12-05 12:52:36.936884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:54.455 12:52:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.455 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.456 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.456 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.456 "name": "raid_bdev1", 00:20:54.456 "uuid": "6894b46d-2a53-40f1-aaa9-ab0ea5d2ed4f", 00:20:54.456 "strip_size_kb": 64, 00:20:54.456 "state": "online", 00:20:54.456 "raid_level": "concat", 00:20:54.456 "superblock": true, 00:20:54.456 "num_base_bdevs": 3, 00:20:54.456 "num_base_bdevs_discovered": 3, 00:20:54.456 "num_base_bdevs_operational": 3, 00:20:54.456 "base_bdevs_list": [ 00:20:54.456 { 00:20:54.456 "name": "BaseBdev1", 00:20:54.456 "uuid": "385efcfb-47bb-568a-a8c9-c51fd6c25b7f", 00:20:54.456 "is_configured": true, 00:20:54.456 "data_offset": 2048, 00:20:54.456 "data_size": 63488 00:20:54.456 }, 00:20:54.456 { 00:20:54.456 "name": "BaseBdev2", 00:20:54.456 "uuid": "2ca24e5a-535d-5f2d-9c92-d712c52803c7", 00:20:54.456 "is_configured": true, 00:20:54.456 "data_offset": 2048, 00:20:54.456 "data_size": 63488 
00:20:54.456 }, 00:20:54.456 { 00:20:54.456 "name": "BaseBdev3", 00:20:54.456 "uuid": "626125f2-665d-5d30-aff2-740b8b356f4a", 00:20:54.456 "is_configured": true, 00:20:54.456 "data_offset": 2048, 00:20:54.456 "data_size": 63488 00:20:54.456 } 00:20:54.456 ] 00:20:54.456 }' 00:20:54.456 12:52:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.456 12:52:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.716 12:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:20:54.716 12:52:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:54.977 [2024-12-05 12:52:37.335075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.918 "name": "raid_bdev1", 00:20:55.918 "uuid": "6894b46d-2a53-40f1-aaa9-ab0ea5d2ed4f", 00:20:55.918 "strip_size_kb": 64, 00:20:55.918 "state": "online", 00:20:55.918 "raid_level": "concat", 00:20:55.918 "superblock": true, 00:20:55.918 "num_base_bdevs": 3, 00:20:55.918 "num_base_bdevs_discovered": 3, 00:20:55.918 "num_base_bdevs_operational": 3, 00:20:55.918 "base_bdevs_list": [ 00:20:55.918 { 00:20:55.918 "name": "BaseBdev1", 00:20:55.918 "uuid": "385efcfb-47bb-568a-a8c9-c51fd6c25b7f", 00:20:55.918 "is_configured": true, 00:20:55.918 "data_offset": 2048, 00:20:55.918 "data_size": 63488 
00:20:55.918 }, 00:20:55.918 { 00:20:55.918 "name": "BaseBdev2", 00:20:55.918 "uuid": "2ca24e5a-535d-5f2d-9c92-d712c52803c7", 00:20:55.918 "is_configured": true, 00:20:55.918 "data_offset": 2048, 00:20:55.918 "data_size": 63488 00:20:55.918 }, 00:20:55.918 { 00:20:55.918 "name": "BaseBdev3", 00:20:55.918 "uuid": "626125f2-665d-5d30-aff2-740b8b356f4a", 00:20:55.918 "is_configured": true, 00:20:55.918 "data_offset": 2048, 00:20:55.918 "data_size": 63488 00:20:55.918 } 00:20:55.918 ] 00:20:55.918 }' 00:20:55.918 12:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.919 12:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.178 12:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:56.178 12:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.178 12:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.178 [2024-12-05 12:52:38.564978] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:56.178 [2024-12-05 12:52:38.565006] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:56.178 [2024-12-05 12:52:38.568086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:56.178 [2024-12-05 12:52:38.568136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:56.178 [2024-12-05 12:52:38.568172] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:56.178 [2024-12-05 12:52:38.568183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:56.178 { 00:20:56.178 "results": [ 00:20:56.178 { 00:20:56.178 "job": "raid_bdev1", 00:20:56.178 "core_mask": "0x1", 00:20:56.178 "workload": "randrw", 00:20:56.178 "percentage": 50, 
00:20:56.178 "status": "finished", 00:20:56.178 "queue_depth": 1, 00:20:56.178 "io_size": 131072, 00:20:56.178 "runtime": 1.228145, 00:20:56.178 "iops": 14347.654389343277, 00:20:56.178 "mibps": 1793.4567986679097, 00:20:56.178 "io_failed": 1, 00:20:56.178 "io_timeout": 0, 00:20:56.178 "avg_latency_us": 95.07191255685639, 00:20:56.178 "min_latency_us": 34.067692307692305, 00:20:56.178 "max_latency_us": 1688.8123076923077 00:20:56.178 } 00:20:56.178 ], 00:20:56.178 "core_count": 1 00:20:56.178 } 00:20:56.178 12:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.178 12:52:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65370 00:20:56.178 12:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65370 ']' 00:20:56.178 12:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65370 00:20:56.178 12:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:20:56.178 12:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.178 12:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65370 00:20:56.178 killing process with pid 65370 00:20:56.178 12:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:56.178 12:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:56.178 12:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65370' 00:20:56.178 12:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65370 00:20:56.178 [2024-12-05 12:52:38.597174] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:56.178 12:52:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65370 00:20:56.178 [2024-12-05 
12:52:38.739311] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:57.114 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:20:57.114 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:20:57.114 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sgicJjDkuV 00:20:57.114 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:20:57.114 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:20:57.114 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:57.114 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:57.114 12:52:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:20:57.114 ************************************ 00:20:57.114 END TEST raid_read_error_test 00:20:57.114 ************************************ 00:20:57.114 00:20:57.114 real 0m3.628s 00:20:57.114 user 0m4.280s 00:20:57.114 sys 0m0.394s 00:20:57.114 12:52:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:57.114 12:52:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.114 12:52:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:20:57.114 12:52:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:57.114 12:52:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:57.114 12:52:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:57.114 ************************************ 00:20:57.114 START TEST raid_write_error_test 00:20:57.114 ************************************ 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:20:57.114 12:52:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:20:57.114 12:52:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RnFRa5Dfmj 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65499 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65499 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65499 ']' 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.114 12:52:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.114 [2024-12-05 12:52:39.605383] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:20:57.114 [2024-12-05 12:52:39.605714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65499 ] 00:20:57.374 [2024-12-05 12:52:39.769110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.374 [2024-12-05 12:52:39.870869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.632 [2024-12-05 12:52:40.007534] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:57.632 [2024-12-05 12:52:40.007572] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:57.893 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.893 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:20:57.893 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:57.894 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:57.894 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.894 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.894 BaseBdev1_malloc 00:20:57.894 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.894 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:20:57.894 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.894 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.894 true 00:20:57.894 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.894 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:57.894 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.894 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.894 [2024-12-05 12:52:40.451234] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:57.894 [2024-12-05 12:52:40.451288] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:57.894 [2024-12-05 12:52:40.451307] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:57.894 [2024-12-05 12:52:40.451317] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:57.894 [2024-12-05 12:52:40.453438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:57.894 [2024-12-05 12:52:40.453475] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:57.894 BaseBdev1 00:20:57.894 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.894 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:57.894 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:57.894 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.894 12:52:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:58.157 BaseBdev2_malloc 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.157 true 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.157 [2024-12-05 12:52:40.495609] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:58.157 [2024-12-05 12:52:40.495776] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.157 [2024-12-05 12:52:40.495799] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:58.157 [2024-12-05 12:52:40.495809] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.157 [2024-12-05 12:52:40.497904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.157 [2024-12-05 12:52:40.497934] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:58.157 BaseBdev2 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:20:58.157 12:52:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.157 BaseBdev3_malloc 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.157 true 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.157 [2024-12-05 12:52:40.563604] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:58.157 [2024-12-05 12:52:40.563656] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.157 [2024-12-05 12:52:40.563673] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:58.157 [2024-12-05 12:52:40.563683] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.157 [2024-12-05 12:52:40.565807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.157 [2024-12-05 12:52:40.565965] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:20:58.157 BaseBdev3 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.157 [2024-12-05 12:52:40.571682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:58.157 [2024-12-05 12:52:40.573611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:58.157 [2024-12-05 12:52:40.573704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:58.157 [2024-12-05 12:52:40.573921] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:58.157 [2024-12-05 12:52:40.574002] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:58.157 [2024-12-05 12:52:40.574273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:20:58.157 [2024-12-05 12:52:40.574507] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:58.157 [2024-12-05 12:52:40.574582] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:58.157 [2024-12-05 12:52:40.574777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.157 "name": "raid_bdev1", 00:20:58.157 "uuid": "a2746a88-48cd-41d9-bfc2-85770618488c", 00:20:58.157 "strip_size_kb": 64, 00:20:58.157 "state": "online", 00:20:58.157 "raid_level": "concat", 00:20:58.157 "superblock": true, 00:20:58.157 "num_base_bdevs": 3, 00:20:58.157 "num_base_bdevs_discovered": 3, 00:20:58.157 "num_base_bdevs_operational": 3, 00:20:58.157 "base_bdevs_list": [ 00:20:58.157 { 00:20:58.157 
"name": "BaseBdev1", 00:20:58.157 "uuid": "38d4b219-b1e6-573d-a796-2bcc5ea5ca4d", 00:20:58.157 "is_configured": true, 00:20:58.157 "data_offset": 2048, 00:20:58.157 "data_size": 63488 00:20:58.157 }, 00:20:58.157 { 00:20:58.157 "name": "BaseBdev2", 00:20:58.157 "uuid": "b57cb0bb-e203-5c7d-84d0-a57a45051257", 00:20:58.157 "is_configured": true, 00:20:58.157 "data_offset": 2048, 00:20:58.157 "data_size": 63488 00:20:58.157 }, 00:20:58.157 { 00:20:58.157 "name": "BaseBdev3", 00:20:58.157 "uuid": "46f44d44-9e9b-59ba-bcac-50aba0aec1f1", 00:20:58.157 "is_configured": true, 00:20:58.157 "data_offset": 2048, 00:20:58.157 "data_size": 63488 00:20:58.157 } 00:20:58.157 ] 00:20:58.157 }' 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.157 12:52:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.419 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:20:58.419 12:52:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:58.419 [2024-12-05 12:52:40.964707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:59.356 "name": "raid_bdev1", 00:20:59.356 "uuid": "a2746a88-48cd-41d9-bfc2-85770618488c", 00:20:59.356 "strip_size_kb": 64, 00:20:59.356 "state": "online", 
00:20:59.356 "raid_level": "concat", 00:20:59.356 "superblock": true, 00:20:59.356 "num_base_bdevs": 3, 00:20:59.356 "num_base_bdevs_discovered": 3, 00:20:59.356 "num_base_bdevs_operational": 3, 00:20:59.356 "base_bdevs_list": [ 00:20:59.356 { 00:20:59.356 "name": "BaseBdev1", 00:20:59.356 "uuid": "38d4b219-b1e6-573d-a796-2bcc5ea5ca4d", 00:20:59.356 "is_configured": true, 00:20:59.356 "data_offset": 2048, 00:20:59.356 "data_size": 63488 00:20:59.356 }, 00:20:59.356 { 00:20:59.356 "name": "BaseBdev2", 00:20:59.356 "uuid": "b57cb0bb-e203-5c7d-84d0-a57a45051257", 00:20:59.356 "is_configured": true, 00:20:59.356 "data_offset": 2048, 00:20:59.356 "data_size": 63488 00:20:59.356 }, 00:20:59.356 { 00:20:59.356 "name": "BaseBdev3", 00:20:59.356 "uuid": "46f44d44-9e9b-59ba-bcac-50aba0aec1f1", 00:20:59.356 "is_configured": true, 00:20:59.356 "data_offset": 2048, 00:20:59.356 "data_size": 63488 00:20:59.356 } 00:20:59.356 ] 00:20:59.356 }' 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:59.356 12:52:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.614 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:59.615 12:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.615 12:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.615 [2024-12-05 12:52:42.193762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:59.615 [2024-12-05 12:52:42.193788] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:59.615 [2024-12-05 12:52:42.196226] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:59.615 [2024-12-05 12:52:42.196360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:59.615 [2024-12-05 12:52:42.196399] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:59.615 [2024-12-05 12:52:42.196407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:59.615 { 00:20:59.615 "results": [ 00:20:59.615 { 00:20:59.615 "job": "raid_bdev1", 00:20:59.615 "core_mask": "0x1", 00:20:59.615 "workload": "randrw", 00:20:59.615 "percentage": 50, 00:20:59.615 "status": "finished", 00:20:59.615 "queue_depth": 1, 00:20:59.615 "io_size": 131072, 00:20:59.615 "runtime": 1.227079, 00:20:59.615 "iops": 16021.788328216846, 00:20:59.615 "mibps": 2002.7235410271057, 00:20:59.615 "io_failed": 1, 00:20:59.615 "io_timeout": 0, 00:20:59.615 "avg_latency_us": 85.27251998294163, 00:20:59.615 "min_latency_us": 26.78153846153846, 00:20:59.615 "max_latency_us": 1335.9261538461537 00:20:59.615 } 00:20:59.615 ], 00:20:59.615 "core_count": 1 00:20:59.615 } 00:20:59.874 12:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.874 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65499 00:20:59.874 12:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65499 ']' 00:20:59.874 12:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65499 00:20:59.874 12:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:20:59.874 12:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.874 12:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65499 00:20:59.874 killing process with pid 65499 00:20:59.874 12:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:59.874 12:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:59.874 12:52:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65499' 00:20:59.874 12:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65499 00:20:59.874 [2024-12-05 12:52:42.223054] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:59.874 12:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65499 00:20:59.874 [2024-12-05 12:52:42.336621] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:00.447 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:00.447 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RnFRa5Dfmj 00:21:00.447 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:00.447 ************************************ 00:21:00.447 END TEST raid_write_error_test 00:21:00.447 ************************************ 00:21:00.447 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:21:00.447 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:21:00.447 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:00.447 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:00.447 12:52:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:21:00.447 00:21:00.447 real 0m3.416s 00:21:00.447 user 0m4.062s 00:21:00.447 sys 0m0.360s 00:21:00.447 12:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.447 12:52:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.447 12:52:42 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:21:00.447 12:52:42 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:21:00.447 12:52:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:00.447 12:52:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.447 12:52:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:00.447 ************************************ 00:21:00.447 START TEST raid_state_function_test 00:21:00.447 ************************************ 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:00.447 Process raid pid: 65632 00:21:00.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65632 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65632' 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65632 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65632 ']' 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 
0 -L bdev_raid 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.447 12:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.706 [2024-12-05 12:52:43.055808] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:21:00.706 [2024-12-05 12:52:43.055946] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.706 [2024-12-05 12:52:43.218070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.966 [2024-12-05 12:52:43.320696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.966 [2024-12-05 12:52:43.459001] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:00.966 [2024-12-05 12:52:43.459038] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:01.533 12:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.533 12:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:21:01.533 12:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:01.533 12:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:21:01.533 12:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.533 [2024-12-05 12:52:43.930666] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:01.533 [2024-12-05 12:52:43.930720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:01.533 [2024-12-05 12:52:43.930730] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:01.533 [2024-12-05 12:52:43.930740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:01.533 [2024-12-05 12:52:43.930746] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:01.533 [2024-12-05 12:52:43.930754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:01.533 12:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.533 12:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:01.533 12:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:01.533 12:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:01.533 12:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:01.533 12:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:01.533 12:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:01.533 12:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.533 12:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.533 12:52:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:01.533 12:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.533 12:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.533 12:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:01.533 12:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.533 12:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.533 12:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.533 12:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.533 "name": "Existed_Raid", 00:21:01.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.533 "strip_size_kb": 0, 00:21:01.533 "state": "configuring", 00:21:01.533 "raid_level": "raid1", 00:21:01.533 "superblock": false, 00:21:01.533 "num_base_bdevs": 3, 00:21:01.533 "num_base_bdevs_discovered": 0, 00:21:01.533 "num_base_bdevs_operational": 3, 00:21:01.533 "base_bdevs_list": [ 00:21:01.533 { 00:21:01.533 "name": "BaseBdev1", 00:21:01.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.534 "is_configured": false, 00:21:01.534 "data_offset": 0, 00:21:01.534 "data_size": 0 00:21:01.534 }, 00:21:01.534 { 00:21:01.534 "name": "BaseBdev2", 00:21:01.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.534 "is_configured": false, 00:21:01.534 "data_offset": 0, 00:21:01.534 "data_size": 0 00:21:01.534 }, 00:21:01.534 { 00:21:01.534 "name": "BaseBdev3", 00:21:01.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.534 "is_configured": false, 00:21:01.534 "data_offset": 0, 00:21:01.534 "data_size": 0 00:21:01.534 } 00:21:01.534 ] 00:21:01.534 }' 00:21:01.534 12:52:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.534 12:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.792 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.793 [2024-12-05 12:52:44.266699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:01.793 [2024-12-05 12:52:44.266733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.793 [2024-12-05 12:52:44.274699] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:01.793 [2024-12-05 12:52:44.274827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:01.793 [2024-12-05 12:52:44.274950] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:01.793 [2024-12-05 12:52:44.275029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:01.793 [2024-12-05 12:52:44.275077] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:01.793 [2024-12-05 12:52:44.275105] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.793 [2024-12-05 12:52:44.307335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:01.793 BaseBdev1 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.793 [ 00:21:01.793 { 00:21:01.793 "name": "BaseBdev1", 00:21:01.793 "aliases": [ 00:21:01.793 "dddfcca7-94f3-42fc-80dd-536da39f4769" 00:21:01.793 ], 00:21:01.793 "product_name": "Malloc disk", 00:21:01.793 "block_size": 512, 00:21:01.793 "num_blocks": 65536, 00:21:01.793 "uuid": "dddfcca7-94f3-42fc-80dd-536da39f4769", 00:21:01.793 "assigned_rate_limits": { 00:21:01.793 "rw_ios_per_sec": 0, 00:21:01.793 "rw_mbytes_per_sec": 0, 00:21:01.793 "r_mbytes_per_sec": 0, 00:21:01.793 "w_mbytes_per_sec": 0 00:21:01.793 }, 00:21:01.793 "claimed": true, 00:21:01.793 "claim_type": "exclusive_write", 00:21:01.793 "zoned": false, 00:21:01.793 "supported_io_types": { 00:21:01.793 "read": true, 00:21:01.793 "write": true, 00:21:01.793 "unmap": true, 00:21:01.793 "flush": true, 00:21:01.793 "reset": true, 00:21:01.793 "nvme_admin": false, 00:21:01.793 "nvme_io": false, 00:21:01.793 "nvme_io_md": false, 00:21:01.793 "write_zeroes": true, 00:21:01.793 "zcopy": true, 00:21:01.793 "get_zone_info": false, 00:21:01.793 "zone_management": false, 00:21:01.793 "zone_append": false, 00:21:01.793 "compare": false, 00:21:01.793 "compare_and_write": false, 00:21:01.793 "abort": true, 00:21:01.793 "seek_hole": false, 00:21:01.793 "seek_data": false, 00:21:01.793 "copy": true, 00:21:01.793 "nvme_iov_md": false 00:21:01.793 }, 00:21:01.793 "memory_domains": [ 00:21:01.793 { 00:21:01.793 "dma_device_id": "system", 00:21:01.793 "dma_device_type": 1 00:21:01.793 }, 00:21:01.793 { 00:21:01.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:01.793 "dma_device_type": 2 00:21:01.793 } 00:21:01.793 ], 00:21:01.793 "driver_specific": {} 00:21:01.793 } 00:21:01.793 ] 00:21:01.793 12:52:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.793 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:21:01.793 "name": "Existed_Raid", 00:21:01.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.793 "strip_size_kb": 0, 00:21:01.793 "state": "configuring", 00:21:01.793 "raid_level": "raid1", 00:21:01.793 "superblock": false, 00:21:01.793 "num_base_bdevs": 3, 00:21:01.793 "num_base_bdevs_discovered": 1, 00:21:01.793 "num_base_bdevs_operational": 3, 00:21:01.793 "base_bdevs_list": [ 00:21:01.793 { 00:21:01.793 "name": "BaseBdev1", 00:21:01.793 "uuid": "dddfcca7-94f3-42fc-80dd-536da39f4769", 00:21:01.793 "is_configured": true, 00:21:01.793 "data_offset": 0, 00:21:01.793 "data_size": 65536 00:21:01.793 }, 00:21:01.793 { 00:21:01.793 "name": "BaseBdev2", 00:21:01.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.794 "is_configured": false, 00:21:01.794 "data_offset": 0, 00:21:01.794 "data_size": 0 00:21:01.794 }, 00:21:01.794 { 00:21:01.794 "name": "BaseBdev3", 00:21:01.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.794 "is_configured": false, 00:21:01.794 "data_offset": 0, 00:21:01.794 "data_size": 0 00:21:01.794 } 00:21:01.794 ] 00:21:01.794 }' 00:21:01.794 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.794 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.363 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:02.363 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.364 [2024-12-05 12:52:44.683464] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:02.364 [2024-12-05 12:52:44.683522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.364 [2024-12-05 12:52:44.691514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:02.364 [2024-12-05 12:52:44.693459] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:02.364 [2024-12-05 12:52:44.693595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:02.364 [2024-12-05 12:52:44.693657] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:02.364 [2024-12-05 12:52:44.693684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.364 "name": "Existed_Raid", 00:21:02.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.364 "strip_size_kb": 0, 00:21:02.364 "state": "configuring", 00:21:02.364 "raid_level": "raid1", 00:21:02.364 "superblock": false, 00:21:02.364 "num_base_bdevs": 3, 00:21:02.364 "num_base_bdevs_discovered": 1, 00:21:02.364 "num_base_bdevs_operational": 3, 00:21:02.364 "base_bdevs_list": [ 00:21:02.364 { 00:21:02.364 "name": "BaseBdev1", 00:21:02.364 "uuid": "dddfcca7-94f3-42fc-80dd-536da39f4769", 00:21:02.364 "is_configured": true, 00:21:02.364 "data_offset": 0, 00:21:02.364 "data_size": 65536 00:21:02.364 }, 00:21:02.364 { 00:21:02.364 "name": "BaseBdev2", 00:21:02.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.364 
"is_configured": false, 00:21:02.364 "data_offset": 0, 00:21:02.364 "data_size": 0 00:21:02.364 }, 00:21:02.364 { 00:21:02.364 "name": "BaseBdev3", 00:21:02.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.364 "is_configured": false, 00:21:02.364 "data_offset": 0, 00:21:02.364 "data_size": 0 00:21:02.364 } 00:21:02.364 ] 00:21:02.364 }' 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.364 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.626 12:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:02.626 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.626 12:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.626 [2024-12-05 12:52:45.006252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:02.626 BaseBdev2 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:02.626 12:52:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.626 [ 00:21:02.626 { 00:21:02.626 "name": "BaseBdev2", 00:21:02.626 "aliases": [ 00:21:02.626 "e1c0f63a-f536-499e-9116-e914004f839e" 00:21:02.626 ], 00:21:02.626 "product_name": "Malloc disk", 00:21:02.626 "block_size": 512, 00:21:02.626 "num_blocks": 65536, 00:21:02.626 "uuid": "e1c0f63a-f536-499e-9116-e914004f839e", 00:21:02.626 "assigned_rate_limits": { 00:21:02.626 "rw_ios_per_sec": 0, 00:21:02.626 "rw_mbytes_per_sec": 0, 00:21:02.626 "r_mbytes_per_sec": 0, 00:21:02.626 "w_mbytes_per_sec": 0 00:21:02.626 }, 00:21:02.626 "claimed": true, 00:21:02.626 "claim_type": "exclusive_write", 00:21:02.626 "zoned": false, 00:21:02.626 "supported_io_types": { 00:21:02.626 "read": true, 00:21:02.626 "write": true, 00:21:02.626 "unmap": true, 00:21:02.626 "flush": true, 00:21:02.626 "reset": true, 00:21:02.626 "nvme_admin": false, 00:21:02.626 "nvme_io": false, 00:21:02.626 "nvme_io_md": false, 00:21:02.626 "write_zeroes": true, 00:21:02.626 "zcopy": true, 00:21:02.626 "get_zone_info": false, 00:21:02.626 "zone_management": false, 00:21:02.626 "zone_append": false, 00:21:02.626 "compare": false, 00:21:02.626 "compare_and_write": false, 00:21:02.626 "abort": true, 00:21:02.626 "seek_hole": false, 00:21:02.626 "seek_data": false, 00:21:02.626 "copy": true, 00:21:02.626 "nvme_iov_md": false 00:21:02.626 }, 00:21:02.626 
"memory_domains": [ 00:21:02.626 { 00:21:02.626 "dma_device_id": "system", 00:21:02.626 "dma_device_type": 1 00:21:02.626 }, 00:21:02.626 { 00:21:02.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.626 "dma_device_type": 2 00:21:02.626 } 00:21:02.626 ], 00:21:02.626 "driver_specific": {} 00:21:02.626 } 00:21:02.626 ] 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.626 "name": "Existed_Raid", 00:21:02.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.626 "strip_size_kb": 0, 00:21:02.626 "state": "configuring", 00:21:02.626 "raid_level": "raid1", 00:21:02.626 "superblock": false, 00:21:02.626 "num_base_bdevs": 3, 00:21:02.626 "num_base_bdevs_discovered": 2, 00:21:02.626 "num_base_bdevs_operational": 3, 00:21:02.626 "base_bdevs_list": [ 00:21:02.626 { 00:21:02.626 "name": "BaseBdev1", 00:21:02.626 "uuid": "dddfcca7-94f3-42fc-80dd-536da39f4769", 00:21:02.626 "is_configured": true, 00:21:02.626 "data_offset": 0, 00:21:02.626 "data_size": 65536 00:21:02.626 }, 00:21:02.626 { 00:21:02.626 "name": "BaseBdev2", 00:21:02.626 "uuid": "e1c0f63a-f536-499e-9116-e914004f839e", 00:21:02.626 "is_configured": true, 00:21:02.626 "data_offset": 0, 00:21:02.626 "data_size": 65536 00:21:02.626 }, 00:21:02.626 { 00:21:02.626 "name": "BaseBdev3", 00:21:02.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.626 "is_configured": false, 00:21:02.626 "data_offset": 0, 00:21:02.626 "data_size": 0 00:21:02.626 } 00:21:02.626 ] 00:21:02.626 }' 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.626 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.885 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:21:02.885 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.885 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.885 [2024-12-05 12:52:45.409347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:02.885 [2024-12-05 12:52:45.409394] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:02.885 [2024-12-05 12:52:45.409406] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:02.885 BaseBdev3 00:21:02.885 [2024-12-05 12:52:45.409701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:02.885 [2024-12-05 12:52:45.409858] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:02.885 [2024-12-05 12:52:45.409868] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:02.885 [2024-12-05 12:52:45.410130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.885 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.885 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:02.885 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:02.885 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:02.885 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:02.885 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:02.885 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:02.885 12:52:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:02.885 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.885 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.885 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.885 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:02.885 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.885 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.885 [ 00:21:02.885 { 00:21:02.885 "name": "BaseBdev3", 00:21:02.885 "aliases": [ 00:21:02.885 "1c75a238-8647-4f1f-84c0-077d38535143" 00:21:02.885 ], 00:21:02.885 "product_name": "Malloc disk", 00:21:02.885 "block_size": 512, 00:21:02.885 "num_blocks": 65536, 00:21:02.885 "uuid": "1c75a238-8647-4f1f-84c0-077d38535143", 00:21:02.885 "assigned_rate_limits": { 00:21:02.885 "rw_ios_per_sec": 0, 00:21:02.885 "rw_mbytes_per_sec": 0, 00:21:02.885 "r_mbytes_per_sec": 0, 00:21:02.885 "w_mbytes_per_sec": 0 00:21:02.885 }, 00:21:02.885 "claimed": true, 00:21:02.885 "claim_type": "exclusive_write", 00:21:02.885 "zoned": false, 00:21:02.885 "supported_io_types": { 00:21:02.885 "read": true, 00:21:02.885 "write": true, 00:21:02.885 "unmap": true, 00:21:02.885 "flush": true, 00:21:02.885 "reset": true, 00:21:02.886 "nvme_admin": false, 00:21:02.886 "nvme_io": false, 00:21:02.886 "nvme_io_md": false, 00:21:02.886 "write_zeroes": true, 00:21:02.886 "zcopy": true, 00:21:02.886 "get_zone_info": false, 00:21:02.886 "zone_management": false, 00:21:02.886 "zone_append": false, 00:21:02.886 "compare": false, 00:21:02.886 "compare_and_write": false, 00:21:02.886 "abort": true, 00:21:02.886 "seek_hole": false, 00:21:02.886 "seek_data": false, 00:21:02.886 
"copy": true, 00:21:02.886 "nvme_iov_md": false 00:21:02.886 }, 00:21:02.886 "memory_domains": [ 00:21:02.886 { 00:21:02.886 "dma_device_id": "system", 00:21:02.886 "dma_device_type": 1 00:21:02.886 }, 00:21:02.886 { 00:21:02.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.886 "dma_device_type": 2 00:21:02.886 } 00:21:02.886 ], 00:21:02.886 "driver_specific": {} 00:21:02.886 } 00:21:02.886 ] 00:21:02.886 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.886 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:02.886 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:02.886 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:02.886 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:02.886 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:02.886 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:02.886 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:02.886 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:02.886 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:02.886 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.886 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.886 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.886 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.886 12:52:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.886 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.886 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.886 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.886 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.146 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:03.146 "name": "Existed_Raid", 00:21:03.146 "uuid": "25fd5e13-7b95-4e48-81aa-3e493c67fdcd", 00:21:03.146 "strip_size_kb": 0, 00:21:03.146 "state": "online", 00:21:03.146 "raid_level": "raid1", 00:21:03.146 "superblock": false, 00:21:03.146 "num_base_bdevs": 3, 00:21:03.146 "num_base_bdevs_discovered": 3, 00:21:03.146 "num_base_bdevs_operational": 3, 00:21:03.146 "base_bdevs_list": [ 00:21:03.146 { 00:21:03.146 "name": "BaseBdev1", 00:21:03.146 "uuid": "dddfcca7-94f3-42fc-80dd-536da39f4769", 00:21:03.146 "is_configured": true, 00:21:03.146 "data_offset": 0, 00:21:03.146 "data_size": 65536 00:21:03.146 }, 00:21:03.146 { 00:21:03.146 "name": "BaseBdev2", 00:21:03.146 "uuid": "e1c0f63a-f536-499e-9116-e914004f839e", 00:21:03.146 "is_configured": true, 00:21:03.146 "data_offset": 0, 00:21:03.146 "data_size": 65536 00:21:03.146 }, 00:21:03.146 { 00:21:03.146 "name": "BaseBdev3", 00:21:03.146 "uuid": "1c75a238-8647-4f1f-84c0-077d38535143", 00:21:03.146 "is_configured": true, 00:21:03.146 "data_offset": 0, 00:21:03.146 "data_size": 65536 00:21:03.146 } 00:21:03.146 ] 00:21:03.146 }' 00:21:03.146 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:03.146 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.406 12:52:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.406 [2024-12-05 12:52:45.773820] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:03.406 "name": "Existed_Raid", 00:21:03.406 "aliases": [ 00:21:03.406 "25fd5e13-7b95-4e48-81aa-3e493c67fdcd" 00:21:03.406 ], 00:21:03.406 "product_name": "Raid Volume", 00:21:03.406 "block_size": 512, 00:21:03.406 "num_blocks": 65536, 00:21:03.406 "uuid": "25fd5e13-7b95-4e48-81aa-3e493c67fdcd", 00:21:03.406 "assigned_rate_limits": { 00:21:03.406 "rw_ios_per_sec": 0, 00:21:03.406 "rw_mbytes_per_sec": 0, 00:21:03.406 "r_mbytes_per_sec": 0, 00:21:03.406 "w_mbytes_per_sec": 0 00:21:03.406 }, 00:21:03.406 "claimed": false, 00:21:03.406 "zoned": false, 
00:21:03.406 "supported_io_types": { 00:21:03.406 "read": true, 00:21:03.406 "write": true, 00:21:03.406 "unmap": false, 00:21:03.406 "flush": false, 00:21:03.406 "reset": true, 00:21:03.406 "nvme_admin": false, 00:21:03.406 "nvme_io": false, 00:21:03.406 "nvme_io_md": false, 00:21:03.406 "write_zeroes": true, 00:21:03.406 "zcopy": false, 00:21:03.406 "get_zone_info": false, 00:21:03.406 "zone_management": false, 00:21:03.406 "zone_append": false, 00:21:03.406 "compare": false, 00:21:03.406 "compare_and_write": false, 00:21:03.406 "abort": false, 00:21:03.406 "seek_hole": false, 00:21:03.406 "seek_data": false, 00:21:03.406 "copy": false, 00:21:03.406 "nvme_iov_md": false 00:21:03.406 }, 00:21:03.406 "memory_domains": [ 00:21:03.406 { 00:21:03.406 "dma_device_id": "system", 00:21:03.406 "dma_device_type": 1 00:21:03.406 }, 00:21:03.406 { 00:21:03.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.406 "dma_device_type": 2 00:21:03.406 }, 00:21:03.406 { 00:21:03.406 "dma_device_id": "system", 00:21:03.406 "dma_device_type": 1 00:21:03.406 }, 00:21:03.406 { 00:21:03.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.406 "dma_device_type": 2 00:21:03.406 }, 00:21:03.406 { 00:21:03.406 "dma_device_id": "system", 00:21:03.406 "dma_device_type": 1 00:21:03.406 }, 00:21:03.406 { 00:21:03.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.406 "dma_device_type": 2 00:21:03.406 } 00:21:03.406 ], 00:21:03.406 "driver_specific": { 00:21:03.406 "raid": { 00:21:03.406 "uuid": "25fd5e13-7b95-4e48-81aa-3e493c67fdcd", 00:21:03.406 "strip_size_kb": 0, 00:21:03.406 "state": "online", 00:21:03.406 "raid_level": "raid1", 00:21:03.406 "superblock": false, 00:21:03.406 "num_base_bdevs": 3, 00:21:03.406 "num_base_bdevs_discovered": 3, 00:21:03.406 "num_base_bdevs_operational": 3, 00:21:03.406 "base_bdevs_list": [ 00:21:03.406 { 00:21:03.406 "name": "BaseBdev1", 00:21:03.406 "uuid": "dddfcca7-94f3-42fc-80dd-536da39f4769", 00:21:03.406 "is_configured": true, 00:21:03.406 
"data_offset": 0, 00:21:03.406 "data_size": 65536 00:21:03.406 }, 00:21:03.406 { 00:21:03.406 "name": "BaseBdev2", 00:21:03.406 "uuid": "e1c0f63a-f536-499e-9116-e914004f839e", 00:21:03.406 "is_configured": true, 00:21:03.406 "data_offset": 0, 00:21:03.406 "data_size": 65536 00:21:03.406 }, 00:21:03.406 { 00:21:03.406 "name": "BaseBdev3", 00:21:03.406 "uuid": "1c75a238-8647-4f1f-84c0-077d38535143", 00:21:03.406 "is_configured": true, 00:21:03.406 "data_offset": 0, 00:21:03.406 "data_size": 65536 00:21:03.406 } 00:21:03.406 ] 00:21:03.406 } 00:21:03.406 } 00:21:03.406 }' 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:03.406 BaseBdev2 00:21:03.406 BaseBdev3' 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.406 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.407 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:03.407 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:21:03.407 12:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:03.407 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.407 12:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.665 [2024-12-05 12:52:45.989577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.665 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:03.665 "name": "Existed_Raid", 00:21:03.665 "uuid": "25fd5e13-7b95-4e48-81aa-3e493c67fdcd", 00:21:03.665 "strip_size_kb": 0, 00:21:03.665 "state": "online", 00:21:03.665 "raid_level": "raid1", 00:21:03.665 "superblock": false, 00:21:03.665 "num_base_bdevs": 3, 00:21:03.665 "num_base_bdevs_discovered": 2, 00:21:03.665 "num_base_bdevs_operational": 2, 00:21:03.665 "base_bdevs_list": [ 00:21:03.665 { 00:21:03.665 "name": null, 00:21:03.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.665 "is_configured": false, 00:21:03.665 "data_offset": 0, 00:21:03.665 "data_size": 65536 00:21:03.665 }, 00:21:03.665 { 00:21:03.665 "name": "BaseBdev2", 00:21:03.665 "uuid": "e1c0f63a-f536-499e-9116-e914004f839e", 00:21:03.665 "is_configured": true, 00:21:03.665 "data_offset": 0, 00:21:03.665 "data_size": 65536 00:21:03.665 }, 00:21:03.665 { 00:21:03.666 "name": "BaseBdev3", 00:21:03.666 "uuid": "1c75a238-8647-4f1f-84c0-077d38535143", 00:21:03.666 "is_configured": true, 00:21:03.666 "data_offset": 0, 00:21:03.666 "data_size": 65536 00:21:03.666 } 00:21:03.666 ] 
00:21:03.666 }' 00:21:03.666 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:03.666 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.924 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:03.924 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:03.924 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.924 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:03.924 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.924 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.924 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.924 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:03.924 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:03.924 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:03.924 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.924 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.924 [2024-12-05 12:52:46.412073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:03.924 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.924 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:03.924 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:03.924 12:52:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.924 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.924 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.924 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:03.924 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.924 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:03.925 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:03.925 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:03.925 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.925 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.925 [2024-12-05 12:52:46.507394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:03.925 [2024-12-05 12:52:46.507650] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:04.185 [2024-12-05 12:52:46.566963] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:04.185 [2024-12-05 12:52:46.567160] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:04.185 [2024-12-05 12:52:46.567239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:04.185 12:52:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.185 BaseBdev2 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:04.185 
12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.185 [ 00:21:04.185 { 00:21:04.185 "name": "BaseBdev2", 00:21:04.185 "aliases": [ 00:21:04.185 "b6afe122-26b2-424b-9717-934692ef9bed" 00:21:04.185 ], 00:21:04.185 "product_name": "Malloc disk", 00:21:04.185 "block_size": 512, 00:21:04.185 "num_blocks": 65536, 00:21:04.185 "uuid": "b6afe122-26b2-424b-9717-934692ef9bed", 00:21:04.185 "assigned_rate_limits": { 00:21:04.185 "rw_ios_per_sec": 0, 00:21:04.185 "rw_mbytes_per_sec": 0, 00:21:04.185 "r_mbytes_per_sec": 0, 00:21:04.185 "w_mbytes_per_sec": 0 00:21:04.185 }, 00:21:04.185 "claimed": false, 00:21:04.185 "zoned": false, 00:21:04.185 "supported_io_types": { 00:21:04.185 "read": true, 00:21:04.185 "write": true, 00:21:04.185 "unmap": true, 00:21:04.185 "flush": true, 00:21:04.185 "reset": true, 00:21:04.185 "nvme_admin": false, 00:21:04.185 "nvme_io": false, 00:21:04.185 "nvme_io_md": false, 00:21:04.185 "write_zeroes": true, 
00:21:04.185 "zcopy": true, 00:21:04.185 "get_zone_info": false, 00:21:04.185 "zone_management": false, 00:21:04.185 "zone_append": false, 00:21:04.185 "compare": false, 00:21:04.185 "compare_and_write": false, 00:21:04.185 "abort": true, 00:21:04.185 "seek_hole": false, 00:21:04.185 "seek_data": false, 00:21:04.185 "copy": true, 00:21:04.185 "nvme_iov_md": false 00:21:04.185 }, 00:21:04.185 "memory_domains": [ 00:21:04.185 { 00:21:04.185 "dma_device_id": "system", 00:21:04.185 "dma_device_type": 1 00:21:04.185 }, 00:21:04.185 { 00:21:04.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.185 "dma_device_type": 2 00:21:04.185 } 00:21:04.185 ], 00:21:04.185 "driver_specific": {} 00:21:04.185 } 00:21:04.185 ] 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.185 BaseBdev3 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:04.185 12:52:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.185 [ 00:21:04.185 { 00:21:04.185 "name": "BaseBdev3", 00:21:04.185 "aliases": [ 00:21:04.185 "7dcfa645-3bb8-4b72-9ba2-58f59007c01e" 00:21:04.185 ], 00:21:04.185 "product_name": "Malloc disk", 00:21:04.185 "block_size": 512, 00:21:04.185 "num_blocks": 65536, 00:21:04.185 "uuid": "7dcfa645-3bb8-4b72-9ba2-58f59007c01e", 00:21:04.185 "assigned_rate_limits": { 00:21:04.185 "rw_ios_per_sec": 0, 00:21:04.185 "rw_mbytes_per_sec": 0, 00:21:04.185 "r_mbytes_per_sec": 0, 00:21:04.185 "w_mbytes_per_sec": 0 00:21:04.185 }, 00:21:04.185 "claimed": false, 00:21:04.185 "zoned": false, 00:21:04.185 "supported_io_types": { 00:21:04.185 "read": true, 00:21:04.185 "write": true, 00:21:04.185 "unmap": true, 00:21:04.185 "flush": true, 00:21:04.185 "reset": true, 00:21:04.185 "nvme_admin": false, 00:21:04.185 "nvme_io": false, 00:21:04.185 "nvme_io_md": false, 00:21:04.185 "write_zeroes": true, 
00:21:04.185 "zcopy": true, 00:21:04.185 "get_zone_info": false, 00:21:04.185 "zone_management": false, 00:21:04.185 "zone_append": false, 00:21:04.185 "compare": false, 00:21:04.185 "compare_and_write": false, 00:21:04.185 "abort": true, 00:21:04.185 "seek_hole": false, 00:21:04.185 "seek_data": false, 00:21:04.185 "copy": true, 00:21:04.185 "nvme_iov_md": false 00:21:04.185 }, 00:21:04.185 "memory_domains": [ 00:21:04.185 { 00:21:04.185 "dma_device_id": "system", 00:21:04.185 "dma_device_type": 1 00:21:04.185 }, 00:21:04.185 { 00:21:04.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.185 "dma_device_type": 2 00:21:04.185 } 00:21:04.185 ], 00:21:04.185 "driver_specific": {} 00:21:04.185 } 00:21:04.185 ] 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.185 [2024-12-05 12:52:46.727441] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:04.185 [2024-12-05 12:52:46.727625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:04.185 [2024-12-05 12:52:46.727653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:04.185 [2024-12-05 12:52:46.729517] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:04.185 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.186 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:04.186 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:04.186 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:04.186 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:04.186 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:04.186 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:04.186 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.186 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.186 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.186 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.186 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:04.186 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.186 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.186 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.186 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.186 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:21:04.186 "name": "Existed_Raid", 00:21:04.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.186 "strip_size_kb": 0, 00:21:04.186 "state": "configuring", 00:21:04.186 "raid_level": "raid1", 00:21:04.186 "superblock": false, 00:21:04.186 "num_base_bdevs": 3, 00:21:04.186 "num_base_bdevs_discovered": 2, 00:21:04.186 "num_base_bdevs_operational": 3, 00:21:04.186 "base_bdevs_list": [ 00:21:04.186 { 00:21:04.186 "name": "BaseBdev1", 00:21:04.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.186 "is_configured": false, 00:21:04.186 "data_offset": 0, 00:21:04.186 "data_size": 0 00:21:04.186 }, 00:21:04.186 { 00:21:04.186 "name": "BaseBdev2", 00:21:04.186 "uuid": "b6afe122-26b2-424b-9717-934692ef9bed", 00:21:04.186 "is_configured": true, 00:21:04.186 "data_offset": 0, 00:21:04.186 "data_size": 65536 00:21:04.186 }, 00:21:04.186 { 00:21:04.186 "name": "BaseBdev3", 00:21:04.186 "uuid": "7dcfa645-3bb8-4b72-9ba2-58f59007c01e", 00:21:04.186 "is_configured": true, 00:21:04.186 "data_offset": 0, 00:21:04.186 "data_size": 65536 00:21:04.186 } 00:21:04.186 ] 00:21:04.186 }' 00:21:04.186 12:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.186 12:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.756 [2024-12-05 12:52:47.051543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.756 "name": "Existed_Raid", 00:21:04.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.756 "strip_size_kb": 0, 00:21:04.756 "state": "configuring", 00:21:04.756 "raid_level": "raid1", 00:21:04.756 "superblock": false, 00:21:04.756 "num_base_bdevs": 3, 
00:21:04.756 "num_base_bdevs_discovered": 1, 00:21:04.756 "num_base_bdevs_operational": 3, 00:21:04.756 "base_bdevs_list": [ 00:21:04.756 { 00:21:04.756 "name": "BaseBdev1", 00:21:04.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.756 "is_configured": false, 00:21:04.756 "data_offset": 0, 00:21:04.756 "data_size": 0 00:21:04.756 }, 00:21:04.756 { 00:21:04.756 "name": null, 00:21:04.756 "uuid": "b6afe122-26b2-424b-9717-934692ef9bed", 00:21:04.756 "is_configured": false, 00:21:04.756 "data_offset": 0, 00:21:04.756 "data_size": 65536 00:21:04.756 }, 00:21:04.756 { 00:21:04.756 "name": "BaseBdev3", 00:21:04.756 "uuid": "7dcfa645-3bb8-4b72-9ba2-58f59007c01e", 00:21:04.756 "is_configured": true, 00:21:04.756 "data_offset": 0, 00:21:04.756 "data_size": 65536 00:21:04.756 } 00:21:04.756 ] 00:21:04.756 }' 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.756 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.017 12:52:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.017 [2024-12-05 12:52:47.458021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:05.017 BaseBdev1 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.017 [ 00:21:05.017 { 00:21:05.017 "name": "BaseBdev1", 00:21:05.017 "aliases": [ 00:21:05.017 "d34b4096-78d9-4571-9df7-6da76231caae" 00:21:05.017 ], 00:21:05.017 "product_name": "Malloc disk", 
00:21:05.017 "block_size": 512, 00:21:05.017 "num_blocks": 65536, 00:21:05.017 "uuid": "d34b4096-78d9-4571-9df7-6da76231caae", 00:21:05.017 "assigned_rate_limits": { 00:21:05.017 "rw_ios_per_sec": 0, 00:21:05.017 "rw_mbytes_per_sec": 0, 00:21:05.017 "r_mbytes_per_sec": 0, 00:21:05.017 "w_mbytes_per_sec": 0 00:21:05.017 }, 00:21:05.017 "claimed": true, 00:21:05.017 "claim_type": "exclusive_write", 00:21:05.017 "zoned": false, 00:21:05.017 "supported_io_types": { 00:21:05.017 "read": true, 00:21:05.017 "write": true, 00:21:05.017 "unmap": true, 00:21:05.017 "flush": true, 00:21:05.017 "reset": true, 00:21:05.017 "nvme_admin": false, 00:21:05.017 "nvme_io": false, 00:21:05.017 "nvme_io_md": false, 00:21:05.017 "write_zeroes": true, 00:21:05.017 "zcopy": true, 00:21:05.017 "get_zone_info": false, 00:21:05.017 "zone_management": false, 00:21:05.017 "zone_append": false, 00:21:05.017 "compare": false, 00:21:05.017 "compare_and_write": false, 00:21:05.017 "abort": true, 00:21:05.017 "seek_hole": false, 00:21:05.017 "seek_data": false, 00:21:05.017 "copy": true, 00:21:05.017 "nvme_iov_md": false 00:21:05.017 }, 00:21:05.017 "memory_domains": [ 00:21:05.017 { 00:21:05.017 "dma_device_id": "system", 00:21:05.017 "dma_device_type": 1 00:21:05.017 }, 00:21:05.017 { 00:21:05.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.017 "dma_device_type": 2 00:21:05.017 } 00:21:05.017 ], 00:21:05.017 "driver_specific": {} 00:21:05.017 } 00:21:05.017 ] 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.017 "name": "Existed_Raid", 00:21:05.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.017 "strip_size_kb": 0, 00:21:05.017 "state": "configuring", 00:21:05.017 "raid_level": "raid1", 00:21:05.017 "superblock": false, 00:21:05.017 "num_base_bdevs": 3, 00:21:05.017 "num_base_bdevs_discovered": 2, 00:21:05.017 "num_base_bdevs_operational": 3, 00:21:05.017 "base_bdevs_list": [ 00:21:05.017 { 00:21:05.017 "name": "BaseBdev1", 00:21:05.017 "uuid": 
"d34b4096-78d9-4571-9df7-6da76231caae", 00:21:05.017 "is_configured": true, 00:21:05.017 "data_offset": 0, 00:21:05.017 "data_size": 65536 00:21:05.017 }, 00:21:05.017 { 00:21:05.017 "name": null, 00:21:05.017 "uuid": "b6afe122-26b2-424b-9717-934692ef9bed", 00:21:05.017 "is_configured": false, 00:21:05.017 "data_offset": 0, 00:21:05.017 "data_size": 65536 00:21:05.017 }, 00:21:05.017 { 00:21:05.017 "name": "BaseBdev3", 00:21:05.017 "uuid": "7dcfa645-3bb8-4b72-9ba2-58f59007c01e", 00:21:05.017 "is_configured": true, 00:21:05.017 "data_offset": 0, 00:21:05.017 "data_size": 65536 00:21:05.017 } 00:21:05.017 ] 00:21:05.017 }' 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:05.017 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.277 [2024-12-05 12:52:47.842154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:05.277 12:52:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.277 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.538 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.538 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.538 "name": "Existed_Raid", 00:21:05.538 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:05.538 "strip_size_kb": 0, 00:21:05.538 "state": "configuring", 00:21:05.538 "raid_level": "raid1", 00:21:05.538 "superblock": false, 00:21:05.538 "num_base_bdevs": 3, 00:21:05.538 "num_base_bdevs_discovered": 1, 00:21:05.538 "num_base_bdevs_operational": 3, 00:21:05.538 "base_bdevs_list": [ 00:21:05.538 { 00:21:05.538 "name": "BaseBdev1", 00:21:05.538 "uuid": "d34b4096-78d9-4571-9df7-6da76231caae", 00:21:05.538 "is_configured": true, 00:21:05.538 "data_offset": 0, 00:21:05.538 "data_size": 65536 00:21:05.538 }, 00:21:05.538 { 00:21:05.538 "name": null, 00:21:05.538 "uuid": "b6afe122-26b2-424b-9717-934692ef9bed", 00:21:05.538 "is_configured": false, 00:21:05.538 "data_offset": 0, 00:21:05.538 "data_size": 65536 00:21:05.538 }, 00:21:05.538 { 00:21:05.538 "name": null, 00:21:05.538 "uuid": "7dcfa645-3bb8-4b72-9ba2-58f59007c01e", 00:21:05.538 "is_configured": false, 00:21:05.538 "data_offset": 0, 00:21:05.538 "data_size": 65536 00:21:05.538 } 00:21:05.538 ] 00:21:05.538 }' 00:21:05.538 12:52:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:05.538 12:52:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.876 [2024-12-05 12:52:48.238264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.876 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.876 "name": "Existed_Raid", 00:21:05.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.876 "strip_size_kb": 0, 00:21:05.876 "state": "configuring", 00:21:05.876 "raid_level": "raid1", 00:21:05.876 "superblock": false, 00:21:05.876 "num_base_bdevs": 3, 00:21:05.876 "num_base_bdevs_discovered": 2, 00:21:05.876 "num_base_bdevs_operational": 3, 00:21:05.876 "base_bdevs_list": [ 00:21:05.876 { 00:21:05.876 "name": "BaseBdev1", 00:21:05.876 "uuid": "d34b4096-78d9-4571-9df7-6da76231caae", 00:21:05.876 "is_configured": true, 00:21:05.876 "data_offset": 0, 00:21:05.876 "data_size": 65536 00:21:05.876 }, 00:21:05.876 { 00:21:05.876 "name": null, 00:21:05.876 "uuid": "b6afe122-26b2-424b-9717-934692ef9bed", 00:21:05.876 "is_configured": false, 00:21:05.876 "data_offset": 0, 00:21:05.876 "data_size": 65536 00:21:05.876 }, 00:21:05.876 { 00:21:05.877 "name": "BaseBdev3", 00:21:05.877 "uuid": "7dcfa645-3bb8-4b72-9ba2-58f59007c01e", 00:21:05.877 "is_configured": true, 00:21:05.877 "data_offset": 0, 00:21:05.877 "data_size": 65536 00:21:05.877 } 00:21:05.877 ] 00:21:05.877 }' 00:21:05.877 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:05.877 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.138 12:52:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.138 [2024-12-05 12:52:48.626328] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.138 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.399 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.399 "name": "Existed_Raid", 00:21:06.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.399 "strip_size_kb": 0, 00:21:06.399 "state": "configuring", 00:21:06.399 "raid_level": "raid1", 00:21:06.399 "superblock": false, 00:21:06.399 "num_base_bdevs": 3, 00:21:06.399 "num_base_bdevs_discovered": 1, 00:21:06.399 "num_base_bdevs_operational": 3, 00:21:06.399 "base_bdevs_list": [ 00:21:06.399 { 00:21:06.399 "name": null, 00:21:06.399 "uuid": "d34b4096-78d9-4571-9df7-6da76231caae", 00:21:06.399 "is_configured": false, 00:21:06.399 "data_offset": 0, 00:21:06.399 "data_size": 65536 00:21:06.399 }, 00:21:06.399 { 00:21:06.399 "name": null, 00:21:06.399 "uuid": "b6afe122-26b2-424b-9717-934692ef9bed", 00:21:06.399 "is_configured": false, 00:21:06.399 "data_offset": 0, 00:21:06.399 "data_size": 65536 00:21:06.399 }, 00:21:06.399 { 00:21:06.399 "name": "BaseBdev3", 00:21:06.399 "uuid": "7dcfa645-3bb8-4b72-9ba2-58f59007c01e", 00:21:06.399 "is_configured": true, 00:21:06.399 "data_offset": 0, 00:21:06.399 "data_size": 65536 00:21:06.399 } 00:21:06.399 ] 00:21:06.399 }' 00:21:06.399 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.399 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:21:06.661 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.661 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.661 12:52:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.661 12:52:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.661 [2024-12-05 12:52:49.022032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.661 "name": "Existed_Raid", 00:21:06.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.661 "strip_size_kb": 0, 00:21:06.661 "state": "configuring", 00:21:06.661 "raid_level": "raid1", 00:21:06.661 "superblock": false, 00:21:06.661 "num_base_bdevs": 3, 00:21:06.661 "num_base_bdevs_discovered": 2, 00:21:06.661 "num_base_bdevs_operational": 3, 00:21:06.661 "base_bdevs_list": [ 00:21:06.661 { 00:21:06.661 "name": null, 00:21:06.661 "uuid": "d34b4096-78d9-4571-9df7-6da76231caae", 00:21:06.661 "is_configured": false, 00:21:06.661 "data_offset": 0, 00:21:06.661 "data_size": 65536 00:21:06.661 }, 00:21:06.661 { 00:21:06.661 "name": "BaseBdev2", 00:21:06.661 "uuid": "b6afe122-26b2-424b-9717-934692ef9bed", 00:21:06.661 "is_configured": true, 00:21:06.661 "data_offset": 0, 00:21:06.661 "data_size": 65536 00:21:06.661 }, 00:21:06.661 { 00:21:06.661 "name": "BaseBdev3", 
00:21:06.661 "uuid": "7dcfa645-3bb8-4b72-9ba2-58f59007c01e", 00:21:06.661 "is_configured": true, 00:21:06.661 "data_offset": 0, 00:21:06.661 "data_size": 65536 00:21:06.661 } 00:21:06.661 ] 00:21:06.661 }' 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.661 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d34b4096-78d9-4571-9df7-6da76231caae 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:06.922 [2024-12-05 12:52:49.436823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:06.922 [2024-12-05 12:52:49.436863] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:06.922 [2024-12-05 12:52:49.436869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:06.922 [2024-12-05 12:52:49.437071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:06.922 [2024-12-05 12:52:49.437188] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:06.922 [2024-12-05 12:52:49.437201] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:06.922 [2024-12-05 12:52:49.437384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.922 NewBaseBdev 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.922 
12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.922 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.922 [ 00:21:06.922 { 00:21:06.922 "name": "NewBaseBdev", 00:21:06.922 "aliases": [ 00:21:06.922 "d34b4096-78d9-4571-9df7-6da76231caae" 00:21:06.922 ], 00:21:06.922 "product_name": "Malloc disk", 00:21:06.922 "block_size": 512, 00:21:06.922 "num_blocks": 65536, 00:21:06.922 "uuid": "d34b4096-78d9-4571-9df7-6da76231caae", 00:21:06.922 "assigned_rate_limits": { 00:21:06.922 "rw_ios_per_sec": 0, 00:21:06.922 "rw_mbytes_per_sec": 0, 00:21:06.922 "r_mbytes_per_sec": 0, 00:21:06.922 "w_mbytes_per_sec": 0 00:21:06.922 }, 00:21:06.922 "claimed": true, 00:21:06.922 "claim_type": "exclusive_write", 00:21:06.922 "zoned": false, 00:21:06.922 "supported_io_types": { 00:21:06.922 "read": true, 00:21:06.922 "write": true, 00:21:06.922 "unmap": true, 00:21:06.922 "flush": true, 00:21:06.922 "reset": true, 00:21:06.922 "nvme_admin": false, 00:21:06.922 "nvme_io": false, 00:21:06.922 "nvme_io_md": false, 00:21:06.922 "write_zeroes": true, 00:21:06.922 "zcopy": true, 00:21:06.922 "get_zone_info": false, 00:21:06.922 "zone_management": false, 00:21:06.922 "zone_append": false, 00:21:06.922 "compare": false, 00:21:06.922 "compare_and_write": false, 00:21:06.922 "abort": true, 00:21:06.922 "seek_hole": false, 00:21:06.923 "seek_data": false, 00:21:06.923 "copy": true, 00:21:06.923 "nvme_iov_md": false 00:21:06.923 }, 00:21:06.923 "memory_domains": [ 00:21:06.923 { 00:21:06.923 "dma_device_id": "system", 00:21:06.923 "dma_device_type": 1 
00:21:06.923 }, 00:21:06.923 { 00:21:06.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.923 "dma_device_type": 2 00:21:06.923 } 00:21:06.923 ], 00:21:06.923 "driver_specific": {} 00:21:06.923 } 00:21:06.923 ] 00:21:06.923 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.923 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:06.923 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:06.923 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:06.923 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:06.923 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:06.923 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:06.923 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:06.923 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.923 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.923 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.923 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.923 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.923 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.923 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.923 12:52:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:06.923 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.923 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.923 "name": "Existed_Raid", 00:21:06.923 "uuid": "1c0983f8-0374-4be3-bd10-6f36a50f6823", 00:21:06.923 "strip_size_kb": 0, 00:21:06.923 "state": "online", 00:21:06.923 "raid_level": "raid1", 00:21:06.923 "superblock": false, 00:21:06.923 "num_base_bdevs": 3, 00:21:06.923 "num_base_bdevs_discovered": 3, 00:21:06.923 "num_base_bdevs_operational": 3, 00:21:06.923 "base_bdevs_list": [ 00:21:06.923 { 00:21:06.923 "name": "NewBaseBdev", 00:21:06.923 "uuid": "d34b4096-78d9-4571-9df7-6da76231caae", 00:21:06.923 "is_configured": true, 00:21:06.923 "data_offset": 0, 00:21:06.923 "data_size": 65536 00:21:06.923 }, 00:21:06.923 { 00:21:06.923 "name": "BaseBdev2", 00:21:06.923 "uuid": "b6afe122-26b2-424b-9717-934692ef9bed", 00:21:06.923 "is_configured": true, 00:21:06.923 "data_offset": 0, 00:21:06.923 "data_size": 65536 00:21:06.923 }, 00:21:06.923 { 00:21:06.923 "name": "BaseBdev3", 00:21:06.923 "uuid": "7dcfa645-3bb8-4b72-9ba2-58f59007c01e", 00:21:06.923 "is_configured": true, 00:21:06.923 "data_offset": 0, 00:21:06.923 "data_size": 65536 00:21:06.923 } 00:21:06.923 ] 00:21:06.923 }' 00:21:06.923 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.923 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.181 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:07.181 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:07.181 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:07.181 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:21:07.181 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:07.181 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:07.181 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:07.181 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:07.181 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.181 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.181 [2024-12-05 12:52:49.753190] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:07.439 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.439 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:07.439 "name": "Existed_Raid", 00:21:07.439 "aliases": [ 00:21:07.439 "1c0983f8-0374-4be3-bd10-6f36a50f6823" 00:21:07.439 ], 00:21:07.439 "product_name": "Raid Volume", 00:21:07.439 "block_size": 512, 00:21:07.439 "num_blocks": 65536, 00:21:07.439 "uuid": "1c0983f8-0374-4be3-bd10-6f36a50f6823", 00:21:07.439 "assigned_rate_limits": { 00:21:07.439 "rw_ios_per_sec": 0, 00:21:07.439 "rw_mbytes_per_sec": 0, 00:21:07.439 "r_mbytes_per_sec": 0, 00:21:07.439 "w_mbytes_per_sec": 0 00:21:07.439 }, 00:21:07.439 "claimed": false, 00:21:07.439 "zoned": false, 00:21:07.439 "supported_io_types": { 00:21:07.439 "read": true, 00:21:07.439 "write": true, 00:21:07.439 "unmap": false, 00:21:07.439 "flush": false, 00:21:07.439 "reset": true, 00:21:07.439 "nvme_admin": false, 00:21:07.439 "nvme_io": false, 00:21:07.439 "nvme_io_md": false, 00:21:07.439 "write_zeroes": true, 00:21:07.439 "zcopy": false, 00:21:07.439 "get_zone_info": false, 00:21:07.439 "zone_management": false, 00:21:07.439 
"zone_append": false, 00:21:07.439 "compare": false, 00:21:07.439 "compare_and_write": false, 00:21:07.439 "abort": false, 00:21:07.439 "seek_hole": false, 00:21:07.439 "seek_data": false, 00:21:07.439 "copy": false, 00:21:07.439 "nvme_iov_md": false 00:21:07.439 }, 00:21:07.439 "memory_domains": [ 00:21:07.439 { 00:21:07.439 "dma_device_id": "system", 00:21:07.439 "dma_device_type": 1 00:21:07.439 }, 00:21:07.439 { 00:21:07.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.439 "dma_device_type": 2 00:21:07.439 }, 00:21:07.439 { 00:21:07.439 "dma_device_id": "system", 00:21:07.439 "dma_device_type": 1 00:21:07.439 }, 00:21:07.439 { 00:21:07.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.439 "dma_device_type": 2 00:21:07.439 }, 00:21:07.439 { 00:21:07.439 "dma_device_id": "system", 00:21:07.439 "dma_device_type": 1 00:21:07.439 }, 00:21:07.439 { 00:21:07.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.439 "dma_device_type": 2 00:21:07.439 } 00:21:07.439 ], 00:21:07.439 "driver_specific": { 00:21:07.439 "raid": { 00:21:07.439 "uuid": "1c0983f8-0374-4be3-bd10-6f36a50f6823", 00:21:07.439 "strip_size_kb": 0, 00:21:07.439 "state": "online", 00:21:07.439 "raid_level": "raid1", 00:21:07.439 "superblock": false, 00:21:07.439 "num_base_bdevs": 3, 00:21:07.439 "num_base_bdevs_discovered": 3, 00:21:07.439 "num_base_bdevs_operational": 3, 00:21:07.439 "base_bdevs_list": [ 00:21:07.439 { 00:21:07.439 "name": "NewBaseBdev", 00:21:07.439 "uuid": "d34b4096-78d9-4571-9df7-6da76231caae", 00:21:07.439 "is_configured": true, 00:21:07.439 "data_offset": 0, 00:21:07.439 "data_size": 65536 00:21:07.439 }, 00:21:07.439 { 00:21:07.439 "name": "BaseBdev2", 00:21:07.439 "uuid": "b6afe122-26b2-424b-9717-934692ef9bed", 00:21:07.439 "is_configured": true, 00:21:07.439 "data_offset": 0, 00:21:07.439 "data_size": 65536 00:21:07.439 }, 00:21:07.439 { 00:21:07.439 "name": "BaseBdev3", 00:21:07.439 "uuid": "7dcfa645-3bb8-4b72-9ba2-58f59007c01e", 00:21:07.439 "is_configured": true, 
00:21:07.439 "data_offset": 0, 00:21:07.439 "data_size": 65536 00:21:07.439 } 00:21:07.439 ] 00:21:07.439 } 00:21:07.439 } 00:21:07.439 }' 00:21:07.439 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:07.439 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:07.439 BaseBdev2 00:21:07.439 BaseBdev3' 00:21:07.439 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:07.439 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:07.439 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:07.439 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:07.439 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:07.439 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.439 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.439 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.440 [2024-12-05 12:52:49.924949] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:21:07.440 [2024-12-05 12:52:49.924980] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:07.440 [2024-12-05 12:52:49.925038] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:07.440 [2024-12-05 12:52:49.925269] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:07.440 [2024-12-05 12:52:49.925277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65632 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65632 ']' 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65632 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65632 00:21:07.440 killing process with pid 65632 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65632' 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65632 00:21:07.440 [2024-12-05 12:52:49.954042] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:21:07.440 12:52:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65632 00:21:07.697 [2024-12-05 12:52:50.104119] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:21:08.334 ************************************ 00:21:08.334 END TEST raid_state_function_test 00:21:08.334 ************************************ 00:21:08.334 00:21:08.334 real 0m7.694s 00:21:08.334 user 0m12.358s 00:21:08.334 sys 0m1.261s 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.334 12:52:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:21:08.334 12:52:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:08.334 12:52:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:08.334 12:52:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:08.334 ************************************ 00:21:08.334 START TEST raid_state_function_test_sb 00:21:08.334 ************************************ 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66225 00:21:08.334 Process raid pid: 66225 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66225' 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66225 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66225 ']' 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.334 12:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.334 [2024-12-05 12:52:50.791372] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:21:08.334 [2024-12-05 12:52:50.791503] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.595 [2024-12-05 12:52:50.951537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.595 [2024-12-05 12:52:51.057378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.855 [2024-12-05 12:52:51.197859] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:08.855 [2024-12-05 12:52:51.197916] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.129 [2024-12-05 12:52:51.566712] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:09.129 [2024-12-05 12:52:51.566765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:09.129 [2024-12-05 12:52:51.566779] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:09.129 [2024-12-05 12:52:51.566790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:09.129 [2024-12-05 12:52:51.566796] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:21:09.129 [2024-12-05 12:52:51.566805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:09.129 "name": "Existed_Raid", 00:21:09.129 "uuid": "b2e3b1e1-5069-409f-8dd6-12f22d2840ff", 00:21:09.129 "strip_size_kb": 0, 00:21:09.129 "state": "configuring", 00:21:09.129 "raid_level": "raid1", 00:21:09.129 "superblock": true, 00:21:09.129 "num_base_bdevs": 3, 00:21:09.129 "num_base_bdevs_discovered": 0, 00:21:09.129 "num_base_bdevs_operational": 3, 00:21:09.129 "base_bdevs_list": [ 00:21:09.129 { 00:21:09.129 "name": "BaseBdev1", 00:21:09.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.129 "is_configured": false, 00:21:09.129 "data_offset": 0, 00:21:09.129 "data_size": 0 00:21:09.129 }, 00:21:09.129 { 00:21:09.129 "name": "BaseBdev2", 00:21:09.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.129 "is_configured": false, 00:21:09.129 "data_offset": 0, 00:21:09.129 "data_size": 0 00:21:09.129 }, 00:21:09.129 { 00:21:09.129 "name": "BaseBdev3", 00:21:09.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.129 "is_configured": false, 00:21:09.129 "data_offset": 0, 00:21:09.129 "data_size": 0 00:21:09.129 } 00:21:09.129 ] 00:21:09.129 }' 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:09.129 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.389 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:09.389 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.389 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.389 [2024-12-05 12:52:51.874728] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:09.389 [2024-12-05 12:52:51.874762] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:09.389 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.389 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:09.389 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.389 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.389 [2024-12-05 12:52:51.882741] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:09.389 [2024-12-05 12:52:51.882777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:09.389 [2024-12-05 12:52:51.882785] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:09.389 [2024-12-05 12:52:51.882794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:09.389 [2024-12-05 12:52:51.882800] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:09.389 [2024-12-05 12:52:51.882808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:09.389 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.389 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:09.389 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.389 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.389 [2024-12-05 12:52:51.915191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:09.389 BaseBdev1 
00:21:09.389 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.389 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.390 [ 00:21:09.390 { 00:21:09.390 "name": "BaseBdev1", 00:21:09.390 "aliases": [ 00:21:09.390 "0b221fcd-db95-478c-b70c-ed32eb51ec94" 00:21:09.390 ], 00:21:09.390 "product_name": "Malloc disk", 00:21:09.390 "block_size": 512, 00:21:09.390 "num_blocks": 65536, 00:21:09.390 "uuid": "0b221fcd-db95-478c-b70c-ed32eb51ec94", 00:21:09.390 "assigned_rate_limits": { 00:21:09.390 
"rw_ios_per_sec": 0, 00:21:09.390 "rw_mbytes_per_sec": 0, 00:21:09.390 "r_mbytes_per_sec": 0, 00:21:09.390 "w_mbytes_per_sec": 0 00:21:09.390 }, 00:21:09.390 "claimed": true, 00:21:09.390 "claim_type": "exclusive_write", 00:21:09.390 "zoned": false, 00:21:09.390 "supported_io_types": { 00:21:09.390 "read": true, 00:21:09.390 "write": true, 00:21:09.390 "unmap": true, 00:21:09.390 "flush": true, 00:21:09.390 "reset": true, 00:21:09.390 "nvme_admin": false, 00:21:09.390 "nvme_io": false, 00:21:09.390 "nvme_io_md": false, 00:21:09.390 "write_zeroes": true, 00:21:09.390 "zcopy": true, 00:21:09.390 "get_zone_info": false, 00:21:09.390 "zone_management": false, 00:21:09.390 "zone_append": false, 00:21:09.390 "compare": false, 00:21:09.390 "compare_and_write": false, 00:21:09.390 "abort": true, 00:21:09.390 "seek_hole": false, 00:21:09.390 "seek_data": false, 00:21:09.390 "copy": true, 00:21:09.390 "nvme_iov_md": false 00:21:09.390 }, 00:21:09.390 "memory_domains": [ 00:21:09.390 { 00:21:09.390 "dma_device_id": "system", 00:21:09.390 "dma_device_type": 1 00:21:09.390 }, 00:21:09.390 { 00:21:09.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.390 "dma_device_type": 2 00:21:09.390 } 00:21:09.390 ], 00:21:09.390 "driver_specific": {} 00:21:09.390 } 00:21:09.390 ] 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:09.390 "name": "Existed_Raid", 00:21:09.390 "uuid": "98ef23af-ef56-486b-a0c3-6437c793fc6f", 00:21:09.390 "strip_size_kb": 0, 00:21:09.390 "state": "configuring", 00:21:09.390 "raid_level": "raid1", 00:21:09.390 "superblock": true, 00:21:09.390 "num_base_bdevs": 3, 00:21:09.390 "num_base_bdevs_discovered": 1, 00:21:09.390 "num_base_bdevs_operational": 3, 00:21:09.390 "base_bdevs_list": [ 00:21:09.390 { 00:21:09.390 "name": "BaseBdev1", 00:21:09.390 "uuid": "0b221fcd-db95-478c-b70c-ed32eb51ec94", 00:21:09.390 "is_configured": true, 00:21:09.390 "data_offset": 2048, 00:21:09.390 "data_size": 63488 
00:21:09.390 }, 00:21:09.390 { 00:21:09.390 "name": "BaseBdev2", 00:21:09.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.390 "is_configured": false, 00:21:09.390 "data_offset": 0, 00:21:09.390 "data_size": 0 00:21:09.390 }, 00:21:09.390 { 00:21:09.390 "name": "BaseBdev3", 00:21:09.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.390 "is_configured": false, 00:21:09.390 "data_offset": 0, 00:21:09.390 "data_size": 0 00:21:09.390 } 00:21:09.390 ] 00:21:09.390 }' 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:09.390 12:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.962 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:09.962 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.962 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.962 [2024-12-05 12:52:52.255313] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:09.963 [2024-12-05 12:52:52.255359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.963 [2024-12-05 12:52:52.263366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:09.963 [2024-12-05 12:52:52.265232] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:09.963 [2024-12-05 12:52:52.265272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:09.963 [2024-12-05 12:52:52.265282] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:09.963 [2024-12-05 12:52:52.265290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:09.963 "name": "Existed_Raid", 00:21:09.963 "uuid": "14b71a9f-f3df-4236-a647-2d65d3c35776", 00:21:09.963 "strip_size_kb": 0, 00:21:09.963 "state": "configuring", 00:21:09.963 "raid_level": "raid1", 00:21:09.963 "superblock": true, 00:21:09.963 "num_base_bdevs": 3, 00:21:09.963 "num_base_bdevs_discovered": 1, 00:21:09.963 "num_base_bdevs_operational": 3, 00:21:09.963 "base_bdevs_list": [ 00:21:09.963 { 00:21:09.963 "name": "BaseBdev1", 00:21:09.963 "uuid": "0b221fcd-db95-478c-b70c-ed32eb51ec94", 00:21:09.963 "is_configured": true, 00:21:09.963 "data_offset": 2048, 00:21:09.963 "data_size": 63488 00:21:09.963 }, 00:21:09.963 { 00:21:09.963 "name": "BaseBdev2", 00:21:09.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.963 "is_configured": false, 00:21:09.963 "data_offset": 0, 00:21:09.963 "data_size": 0 00:21:09.963 }, 00:21:09.963 { 00:21:09.963 "name": "BaseBdev3", 00:21:09.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.963 "is_configured": false, 00:21:09.963 "data_offset": 0, 00:21:09.963 "data_size": 0 00:21:09.963 } 00:21:09.963 ] 00:21:09.963 }' 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:09.963 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.224 [2024-12-05 12:52:52.602033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:10.224 BaseBdev2 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.224 [ 00:21:10.224 { 00:21:10.224 "name": "BaseBdev2", 00:21:10.224 "aliases": [ 00:21:10.224 "bdbf0e6c-28e1-4bff-bf47-24194e3d120e" 00:21:10.224 ], 00:21:10.224 "product_name": "Malloc disk", 00:21:10.224 "block_size": 512, 00:21:10.224 "num_blocks": 65536, 00:21:10.224 "uuid": "bdbf0e6c-28e1-4bff-bf47-24194e3d120e", 00:21:10.224 "assigned_rate_limits": { 00:21:10.224 "rw_ios_per_sec": 0, 00:21:10.224 "rw_mbytes_per_sec": 0, 00:21:10.224 "r_mbytes_per_sec": 0, 00:21:10.224 "w_mbytes_per_sec": 0 00:21:10.224 }, 00:21:10.224 "claimed": true, 00:21:10.224 "claim_type": "exclusive_write", 00:21:10.224 "zoned": false, 00:21:10.224 "supported_io_types": { 00:21:10.224 "read": true, 00:21:10.224 "write": true, 00:21:10.224 "unmap": true, 00:21:10.224 "flush": true, 00:21:10.224 "reset": true, 00:21:10.224 "nvme_admin": false, 00:21:10.224 "nvme_io": false, 00:21:10.224 "nvme_io_md": false, 00:21:10.224 "write_zeroes": true, 00:21:10.224 "zcopy": true, 00:21:10.224 "get_zone_info": false, 00:21:10.224 "zone_management": false, 00:21:10.224 "zone_append": false, 00:21:10.224 "compare": false, 00:21:10.224 "compare_and_write": false, 00:21:10.224 "abort": true, 00:21:10.224 "seek_hole": false, 00:21:10.224 "seek_data": false, 00:21:10.224 "copy": true, 00:21:10.224 "nvme_iov_md": false 00:21:10.224 }, 00:21:10.224 "memory_domains": [ 00:21:10.224 { 00:21:10.224 "dma_device_id": "system", 00:21:10.224 "dma_device_type": 1 00:21:10.224 }, 00:21:10.224 { 00:21:10.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.224 "dma_device_type": 2 00:21:10.224 } 00:21:10.224 ], 00:21:10.224 "driver_specific": {} 00:21:10.224 } 00:21:10.224 ] 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.224 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.224 
12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.224 "name": "Existed_Raid", 00:21:10.224 "uuid": "14b71a9f-f3df-4236-a647-2d65d3c35776", 00:21:10.224 "strip_size_kb": 0, 00:21:10.224 "state": "configuring", 00:21:10.224 "raid_level": "raid1", 00:21:10.224 "superblock": true, 00:21:10.224 "num_base_bdevs": 3, 00:21:10.224 "num_base_bdevs_discovered": 2, 00:21:10.224 "num_base_bdevs_operational": 3, 00:21:10.225 "base_bdevs_list": [ 00:21:10.225 { 00:21:10.225 "name": "BaseBdev1", 00:21:10.225 "uuid": "0b221fcd-db95-478c-b70c-ed32eb51ec94", 00:21:10.225 "is_configured": true, 00:21:10.225 "data_offset": 2048, 00:21:10.225 "data_size": 63488 00:21:10.225 }, 00:21:10.225 { 00:21:10.225 "name": "BaseBdev2", 00:21:10.225 "uuid": "bdbf0e6c-28e1-4bff-bf47-24194e3d120e", 00:21:10.225 "is_configured": true, 00:21:10.225 "data_offset": 2048, 00:21:10.225 "data_size": 63488 00:21:10.225 }, 00:21:10.225 { 00:21:10.225 "name": "BaseBdev3", 00:21:10.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.225 "is_configured": false, 00:21:10.225 "data_offset": 0, 00:21:10.225 "data_size": 0 00:21:10.225 } 00:21:10.225 ] 00:21:10.225 }' 00:21:10.225 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.225 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.487 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:10.487 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.487 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.487 [2024-12-05 12:52:52.978267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:10.487 [2024-12-05 12:52:52.978511] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:21:10.487 [2024-12-05 12:52:52.978531] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:10.487 [2024-12-05 12:52:52.978796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:10.487 BaseBdev3 00:21:10.487 [2024-12-05 12:52:52.978941] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:10.487 [2024-12-05 12:52:52.978950] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:10.487 [2024-12-05 12:52:52.979081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.487 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.487 12:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:10.487 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:10.487 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:10.487 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:10.487 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:10.487 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:10.487 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:10.487 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.487 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.487 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.487 12:52:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:10.487 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.487 12:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.487 [ 00:21:10.487 { 00:21:10.487 "name": "BaseBdev3", 00:21:10.487 "aliases": [ 00:21:10.487 "a3e599f0-7dd8-4420-bb42-162f9c45cfda" 00:21:10.487 ], 00:21:10.487 "product_name": "Malloc disk", 00:21:10.487 "block_size": 512, 00:21:10.487 "num_blocks": 65536, 00:21:10.487 "uuid": "a3e599f0-7dd8-4420-bb42-162f9c45cfda", 00:21:10.487 "assigned_rate_limits": { 00:21:10.487 "rw_ios_per_sec": 0, 00:21:10.487 "rw_mbytes_per_sec": 0, 00:21:10.487 "r_mbytes_per_sec": 0, 00:21:10.487 "w_mbytes_per_sec": 0 00:21:10.487 }, 00:21:10.487 "claimed": true, 00:21:10.487 "claim_type": "exclusive_write", 00:21:10.487 "zoned": false, 00:21:10.487 "supported_io_types": { 00:21:10.487 "read": true, 00:21:10.487 "write": true, 00:21:10.487 "unmap": true, 00:21:10.487 "flush": true, 00:21:10.487 "reset": true, 00:21:10.487 "nvme_admin": false, 00:21:10.487 "nvme_io": false, 00:21:10.487 "nvme_io_md": false, 00:21:10.487 "write_zeroes": true, 00:21:10.487 "zcopy": true, 00:21:10.487 "get_zone_info": false, 00:21:10.487 "zone_management": false, 00:21:10.487 "zone_append": false, 00:21:10.487 "compare": false, 00:21:10.487 "compare_and_write": false, 00:21:10.487 "abort": true, 00:21:10.487 "seek_hole": false, 00:21:10.487 "seek_data": false, 00:21:10.487 "copy": true, 00:21:10.487 "nvme_iov_md": false 00:21:10.487 }, 00:21:10.487 "memory_domains": [ 00:21:10.487 { 00:21:10.487 "dma_device_id": "system", 00:21:10.487 "dma_device_type": 1 00:21:10.487 }, 00:21:10.487 { 00:21:10.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.487 "dma_device_type": 2 00:21:10.487 } 00:21:10.487 ], 00:21:10.487 "driver_specific": {} 00:21:10.487 } 00:21:10.487 ] 
00:21:10.487 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.487 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:10.487 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:10.487 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:10.487 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:10.487 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:10.487 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:10.487 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:10.487 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:10.487 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:10.487 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.487 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:10.487 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.487 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.487 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.487 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.487 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:10.487 
12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.487 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.487 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.487 "name": "Existed_Raid", 00:21:10.487 "uuid": "14b71a9f-f3df-4236-a647-2d65d3c35776", 00:21:10.487 "strip_size_kb": 0, 00:21:10.487 "state": "online", 00:21:10.487 "raid_level": "raid1", 00:21:10.487 "superblock": true, 00:21:10.487 "num_base_bdevs": 3, 00:21:10.487 "num_base_bdevs_discovered": 3, 00:21:10.487 "num_base_bdevs_operational": 3, 00:21:10.487 "base_bdevs_list": [ 00:21:10.487 { 00:21:10.487 "name": "BaseBdev1", 00:21:10.487 "uuid": "0b221fcd-db95-478c-b70c-ed32eb51ec94", 00:21:10.487 "is_configured": true, 00:21:10.487 "data_offset": 2048, 00:21:10.487 "data_size": 63488 00:21:10.487 }, 00:21:10.487 { 00:21:10.487 "name": "BaseBdev2", 00:21:10.487 "uuid": "bdbf0e6c-28e1-4bff-bf47-24194e3d120e", 00:21:10.487 "is_configured": true, 00:21:10.487 "data_offset": 2048, 00:21:10.487 "data_size": 63488 00:21:10.487 }, 00:21:10.487 { 00:21:10.487 "name": "BaseBdev3", 00:21:10.487 "uuid": "a3e599f0-7dd8-4420-bb42-162f9c45cfda", 00:21:10.487 "is_configured": true, 00:21:10.487 "data_offset": 2048, 00:21:10.487 "data_size": 63488 00:21:10.487 } 00:21:10.487 ] 00:21:10.487 }' 00:21:10.487 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.487 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.747 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:10.747 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:10.747 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:21:10.748 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:10.748 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:10.748 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:10.748 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:10.748 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.748 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:10.748 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.748 [2024-12-05 12:52:53.330751] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:11.007 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.007 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:11.007 "name": "Existed_Raid", 00:21:11.007 "aliases": [ 00:21:11.007 "14b71a9f-f3df-4236-a647-2d65d3c35776" 00:21:11.007 ], 00:21:11.007 "product_name": "Raid Volume", 00:21:11.007 "block_size": 512, 00:21:11.007 "num_blocks": 63488, 00:21:11.007 "uuid": "14b71a9f-f3df-4236-a647-2d65d3c35776", 00:21:11.007 "assigned_rate_limits": { 00:21:11.007 "rw_ios_per_sec": 0, 00:21:11.007 "rw_mbytes_per_sec": 0, 00:21:11.007 "r_mbytes_per_sec": 0, 00:21:11.007 "w_mbytes_per_sec": 0 00:21:11.007 }, 00:21:11.007 "claimed": false, 00:21:11.007 "zoned": false, 00:21:11.007 "supported_io_types": { 00:21:11.007 "read": true, 00:21:11.007 "write": true, 00:21:11.007 "unmap": false, 00:21:11.007 "flush": false, 00:21:11.007 "reset": true, 00:21:11.007 "nvme_admin": false, 00:21:11.007 "nvme_io": false, 00:21:11.007 "nvme_io_md": false, 00:21:11.007 "write_zeroes": true, 
00:21:11.007 "zcopy": false, 00:21:11.007 "get_zone_info": false, 00:21:11.007 "zone_management": false, 00:21:11.007 "zone_append": false, 00:21:11.007 "compare": false, 00:21:11.007 "compare_and_write": false, 00:21:11.007 "abort": false, 00:21:11.007 "seek_hole": false, 00:21:11.007 "seek_data": false, 00:21:11.007 "copy": false, 00:21:11.007 "nvme_iov_md": false 00:21:11.007 }, 00:21:11.007 "memory_domains": [ 00:21:11.007 { 00:21:11.007 "dma_device_id": "system", 00:21:11.007 "dma_device_type": 1 00:21:11.007 }, 00:21:11.007 { 00:21:11.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.007 "dma_device_type": 2 00:21:11.007 }, 00:21:11.007 { 00:21:11.007 "dma_device_id": "system", 00:21:11.007 "dma_device_type": 1 00:21:11.007 }, 00:21:11.007 { 00:21:11.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.007 "dma_device_type": 2 00:21:11.007 }, 00:21:11.007 { 00:21:11.007 "dma_device_id": "system", 00:21:11.007 "dma_device_type": 1 00:21:11.007 }, 00:21:11.007 { 00:21:11.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.007 "dma_device_type": 2 00:21:11.007 } 00:21:11.007 ], 00:21:11.007 "driver_specific": { 00:21:11.007 "raid": { 00:21:11.007 "uuid": "14b71a9f-f3df-4236-a647-2d65d3c35776", 00:21:11.007 "strip_size_kb": 0, 00:21:11.007 "state": "online", 00:21:11.007 "raid_level": "raid1", 00:21:11.007 "superblock": true, 00:21:11.007 "num_base_bdevs": 3, 00:21:11.007 "num_base_bdevs_discovered": 3, 00:21:11.007 "num_base_bdevs_operational": 3, 00:21:11.007 "base_bdevs_list": [ 00:21:11.007 { 00:21:11.007 "name": "BaseBdev1", 00:21:11.007 "uuid": "0b221fcd-db95-478c-b70c-ed32eb51ec94", 00:21:11.007 "is_configured": true, 00:21:11.007 "data_offset": 2048, 00:21:11.007 "data_size": 63488 00:21:11.007 }, 00:21:11.007 { 00:21:11.007 "name": "BaseBdev2", 00:21:11.007 "uuid": "bdbf0e6c-28e1-4bff-bf47-24194e3d120e", 00:21:11.007 "is_configured": true, 00:21:11.007 "data_offset": 2048, 00:21:11.007 "data_size": 63488 00:21:11.007 }, 00:21:11.007 { 
00:21:11.007 "name": "BaseBdev3", 00:21:11.007 "uuid": "a3e599f0-7dd8-4420-bb42-162f9c45cfda", 00:21:11.007 "is_configured": true, 00:21:11.007 "data_offset": 2048, 00:21:11.007 "data_size": 63488 00:21:11.007 } 00:21:11.007 ] 00:21:11.007 } 00:21:11.007 } 00:21:11.007 }' 00:21:11.007 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:11.008 BaseBdev2 00:21:11.008 BaseBdev3' 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:11.008 12:52:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.008 [2024-12-05 12:52:53.510501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.008 
12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.008 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.268 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.268 "name": "Existed_Raid", 00:21:11.268 "uuid": "14b71a9f-f3df-4236-a647-2d65d3c35776", 00:21:11.268 "strip_size_kb": 0, 00:21:11.268 "state": "online", 00:21:11.268 "raid_level": "raid1", 00:21:11.268 "superblock": true, 00:21:11.268 "num_base_bdevs": 3, 00:21:11.268 "num_base_bdevs_discovered": 2, 00:21:11.268 "num_base_bdevs_operational": 2, 00:21:11.268 "base_bdevs_list": [ 00:21:11.268 { 00:21:11.268 "name": null, 00:21:11.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.268 "is_configured": false, 00:21:11.268 "data_offset": 0, 00:21:11.268 "data_size": 63488 00:21:11.268 }, 00:21:11.268 { 00:21:11.268 "name": "BaseBdev2", 00:21:11.268 "uuid": "bdbf0e6c-28e1-4bff-bf47-24194e3d120e", 00:21:11.268 "is_configured": true, 00:21:11.268 "data_offset": 2048, 00:21:11.268 "data_size": 63488 00:21:11.268 }, 00:21:11.268 { 00:21:11.268 "name": "BaseBdev3", 00:21:11.268 "uuid": "a3e599f0-7dd8-4420-bb42-162f9c45cfda", 00:21:11.268 "is_configured": true, 00:21:11.268 "data_offset": 2048, 00:21:11.268 "data_size": 63488 00:21:11.268 } 00:21:11.268 ] 00:21:11.268 }' 00:21:11.268 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.268 
12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.529 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:11.529 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:11.529 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.530 [2024-12-05 12:52:53.902615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.530 12:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.530 [2024-12-05 12:52:53.997411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:11.530 [2024-12-05 12:52:53.997517] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:11.530 [2024-12-05 12:52:54.057783] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:11.530 [2024-12-05 12:52:54.057836] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:11.530 [2024-12-05 12:52:54.057847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:11.530 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.530 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:11.530 12:52:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:11.530 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.530 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.530 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.530 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:11.530 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.530 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:11.530 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:11.530 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:21:11.530 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:11.530 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:11.530 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:11.530 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.530 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.792 BaseBdev2 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.792 [ 00:21:11.792 { 00:21:11.792 "name": "BaseBdev2", 00:21:11.792 "aliases": [ 00:21:11.792 "681b321e-db22-4456-9eb9-1701a7e3b797" 00:21:11.792 ], 00:21:11.792 "product_name": "Malloc disk", 00:21:11.792 "block_size": 512, 00:21:11.792 "num_blocks": 65536, 00:21:11.792 "uuid": "681b321e-db22-4456-9eb9-1701a7e3b797", 00:21:11.792 "assigned_rate_limits": { 00:21:11.792 "rw_ios_per_sec": 0, 00:21:11.792 "rw_mbytes_per_sec": 0, 00:21:11.792 "r_mbytes_per_sec": 0, 00:21:11.792 "w_mbytes_per_sec": 0 00:21:11.792 }, 00:21:11.792 "claimed": false, 00:21:11.792 "zoned": false, 00:21:11.792 "supported_io_types": { 00:21:11.792 "read": true, 00:21:11.792 "write": true, 00:21:11.792 "unmap": true, 00:21:11.792 "flush": true, 00:21:11.792 "reset": true, 00:21:11.792 "nvme_admin": false, 00:21:11.792 "nvme_io": false, 00:21:11.792 
"nvme_io_md": false, 00:21:11.792 "write_zeroes": true, 00:21:11.792 "zcopy": true, 00:21:11.792 "get_zone_info": false, 00:21:11.792 "zone_management": false, 00:21:11.792 "zone_append": false, 00:21:11.792 "compare": false, 00:21:11.792 "compare_and_write": false, 00:21:11.792 "abort": true, 00:21:11.792 "seek_hole": false, 00:21:11.792 "seek_data": false, 00:21:11.792 "copy": true, 00:21:11.792 "nvme_iov_md": false 00:21:11.792 }, 00:21:11.792 "memory_domains": [ 00:21:11.792 { 00:21:11.792 "dma_device_id": "system", 00:21:11.792 "dma_device_type": 1 00:21:11.792 }, 00:21:11.792 { 00:21:11.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.792 "dma_device_type": 2 00:21:11.792 } 00:21:11.792 ], 00:21:11.792 "driver_specific": {} 00:21:11.792 } 00:21:11.792 ] 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.792 BaseBdev3 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:11.792 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.793 [ 00:21:11.793 { 00:21:11.793 "name": "BaseBdev3", 00:21:11.793 "aliases": [ 00:21:11.793 "7d0b495b-71d9-4787-a8e7-b2ca2d7c801b" 00:21:11.793 ], 00:21:11.793 "product_name": "Malloc disk", 00:21:11.793 "block_size": 512, 00:21:11.793 "num_blocks": 65536, 00:21:11.793 "uuid": "7d0b495b-71d9-4787-a8e7-b2ca2d7c801b", 00:21:11.793 "assigned_rate_limits": { 00:21:11.793 "rw_ios_per_sec": 0, 00:21:11.793 "rw_mbytes_per_sec": 0, 00:21:11.793 "r_mbytes_per_sec": 0, 00:21:11.793 "w_mbytes_per_sec": 0 00:21:11.793 }, 00:21:11.793 "claimed": false, 00:21:11.793 "zoned": false, 00:21:11.793 "supported_io_types": { 00:21:11.793 "read": true, 00:21:11.793 "write": true, 00:21:11.793 "unmap": true, 00:21:11.793 "flush": true, 00:21:11.793 "reset": true, 00:21:11.793 "nvme_admin": false, 
00:21:11.793 "nvme_io": false, 00:21:11.793 "nvme_io_md": false, 00:21:11.793 "write_zeroes": true, 00:21:11.793 "zcopy": true, 00:21:11.793 "get_zone_info": false, 00:21:11.793 "zone_management": false, 00:21:11.793 "zone_append": false, 00:21:11.793 "compare": false, 00:21:11.793 "compare_and_write": false, 00:21:11.793 "abort": true, 00:21:11.793 "seek_hole": false, 00:21:11.793 "seek_data": false, 00:21:11.793 "copy": true, 00:21:11.793 "nvme_iov_md": false 00:21:11.793 }, 00:21:11.793 "memory_domains": [ 00:21:11.793 { 00:21:11.793 "dma_device_id": "system", 00:21:11.793 "dma_device_type": 1 00:21:11.793 }, 00:21:11.793 { 00:21:11.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.793 "dma_device_type": 2 00:21:11.793 } 00:21:11.793 ], 00:21:11.793 "driver_specific": {} 00:21:11.793 } 00:21:11.793 ] 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.793 [2024-12-05 12:52:54.186534] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:11.793 [2024-12-05 12:52:54.186576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:11.793 [2024-12-05 12:52:54.186593] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:11.793 [2024-12-05 12:52:54.188406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.793 
12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.793 "name": "Existed_Raid", 00:21:11.793 "uuid": "670f3b4c-dc43-4d2d-93e0-d07fa9466de3", 00:21:11.793 "strip_size_kb": 0, 00:21:11.793 "state": "configuring", 00:21:11.793 "raid_level": "raid1", 00:21:11.793 "superblock": true, 00:21:11.793 "num_base_bdevs": 3, 00:21:11.793 "num_base_bdevs_discovered": 2, 00:21:11.793 "num_base_bdevs_operational": 3, 00:21:11.793 "base_bdevs_list": [ 00:21:11.793 { 00:21:11.793 "name": "BaseBdev1", 00:21:11.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.793 "is_configured": false, 00:21:11.793 "data_offset": 0, 00:21:11.793 "data_size": 0 00:21:11.793 }, 00:21:11.793 { 00:21:11.793 "name": "BaseBdev2", 00:21:11.793 "uuid": "681b321e-db22-4456-9eb9-1701a7e3b797", 00:21:11.793 "is_configured": true, 00:21:11.793 "data_offset": 2048, 00:21:11.793 "data_size": 63488 00:21:11.793 }, 00:21:11.793 { 00:21:11.793 "name": "BaseBdev3", 00:21:11.793 "uuid": "7d0b495b-71d9-4787-a8e7-b2ca2d7c801b", 00:21:11.793 "is_configured": true, 00:21:11.793 "data_offset": 2048, 00:21:11.793 "data_size": 63488 00:21:11.793 } 00:21:11.793 ] 00:21:11.793 }' 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.793 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.054 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:12.054 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.054 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.054 [2024-12-05 12:52:54.510615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:12.054 12:52:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.054 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:21:12.054 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:21:12.054 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:21:12.054 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:21:12.054 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:21:12.054 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:21:12.054 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:12.054 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:12.054 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:12.054 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:12.054 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:12.054 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.054 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:12.054 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:12.054 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.054 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:12.054 "name": "Existed_Raid",
00:21:12.054 "uuid": "670f3b4c-dc43-4d2d-93e0-d07fa9466de3",
00:21:12.054 "strip_size_kb": 0,
00:21:12.054 "state": "configuring",
00:21:12.054 "raid_level": "raid1",
00:21:12.054 "superblock": true,
00:21:12.054 "num_base_bdevs": 3,
00:21:12.054 "num_base_bdevs_discovered": 1,
00:21:12.054 "num_base_bdevs_operational": 3,
00:21:12.054 "base_bdevs_list": [
00:21:12.054 {
00:21:12.054 "name": "BaseBdev1",
00:21:12.054 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:12.054 "is_configured": false,
00:21:12.054 "data_offset": 0,
00:21:12.054 "data_size": 0
00:21:12.054 },
00:21:12.054 {
00:21:12.054 "name": null,
00:21:12.054 "uuid": "681b321e-db22-4456-9eb9-1701a7e3b797",
00:21:12.054 "is_configured": false,
00:21:12.054 "data_offset": 0,
00:21:12.054 "data_size": 63488
00:21:12.054 },
00:21:12.054 {
00:21:12.054 "name": "BaseBdev3",
00:21:12.054 "uuid": "7d0b495b-71d9-4787-a8e7-b2ca2d7c801b",
00:21:12.054 "is_configured": true,
00:21:12.054 "data_offset": 2048,
00:21:12.054 "data_size": 63488
00:21:12.054 }
00:21:12.054 ]
00:21:12.054 }'
00:21:12.054 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:12.054 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:12.313 [2024-12-05 12:52:54.886142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:21:12.313 BaseBdev1
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.313 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:12.573 [
00:21:12.573 {
00:21:12.573 "name": "BaseBdev1",
00:21:12.573 "aliases": [
00:21:12.573 "58284348-e071-4e97-a8b5-8c0796b17798"
00:21:12.573 ],
00:21:12.573 "product_name": "Malloc disk",
00:21:12.573 "block_size": 512,
00:21:12.573 "num_blocks": 65536,
00:21:12.573 "uuid": "58284348-e071-4e97-a8b5-8c0796b17798",
00:21:12.573 "assigned_rate_limits": {
00:21:12.573 "rw_ios_per_sec": 0,
00:21:12.573 "rw_mbytes_per_sec": 0,
00:21:12.573 "r_mbytes_per_sec": 0,
00:21:12.573 "w_mbytes_per_sec": 0
00:21:12.573 },
00:21:12.573 "claimed": true,
00:21:12.573 "claim_type": "exclusive_write",
00:21:12.573 "zoned": false,
00:21:12.573 "supported_io_types": {
00:21:12.573 "read": true,
00:21:12.573 "write": true,
00:21:12.573 "unmap": true,
00:21:12.573 "flush": true,
00:21:12.573 "reset": true,
00:21:12.573 "nvme_admin": false,
00:21:12.573 "nvme_io": false,
00:21:12.573 "nvme_io_md": false,
00:21:12.573 "write_zeroes": true,
00:21:12.573 "zcopy": true,
00:21:12.573 "get_zone_info": false,
00:21:12.573 "zone_management": false,
00:21:12.573 "zone_append": false,
00:21:12.573 "compare": false,
00:21:12.573 "compare_and_write": false,
00:21:12.573 "abort": true,
00:21:12.573 "seek_hole": false,
00:21:12.573 "seek_data": false,
00:21:12.573 "copy": true,
00:21:12.573 "nvme_iov_md": false
00:21:12.573 },
00:21:12.573 "memory_domains": [
00:21:12.573 {
00:21:12.573 "dma_device_id": "system",
00:21:12.573 "dma_device_type": 1
00:21:12.573 },
00:21:12.573 {
00:21:12.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:12.573 "dma_device_type": 2
00:21:12.573 }
00:21:12.573 ],
00:21:12.573 "driver_specific": {}
00:21:12.573 }
00:21:12.573 ]
00:21:12.573 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.573 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:21:12.573 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:21:12.573 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:21:12.573 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:21:12.573 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:21:12.573 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:21:12.573 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:21:12.573 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:12.573 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:12.573 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:12.573 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:12.573 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:12.573 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:12.573 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.573 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:12.573 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.573 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:12.573 "name": "Existed_Raid",
00:21:12.573 "uuid": "670f3b4c-dc43-4d2d-93e0-d07fa9466de3",
00:21:12.573 "strip_size_kb": 0,
00:21:12.573 "state": "configuring",
00:21:12.573 "raid_level": "raid1",
00:21:12.573 "superblock": true,
00:21:12.573 "num_base_bdevs": 3,
00:21:12.573 "num_base_bdevs_discovered": 2,
00:21:12.573 "num_base_bdevs_operational": 3,
00:21:12.573 "base_bdevs_list": [
00:21:12.573 {
00:21:12.573 "name": "BaseBdev1",
00:21:12.573 "uuid": "58284348-e071-4e97-a8b5-8c0796b17798",
00:21:12.573 "is_configured": true,
00:21:12.573 "data_offset": 2048,
00:21:12.573 "data_size": 63488
00:21:12.573 },
00:21:12.573 {
00:21:12.573 "name": null,
00:21:12.573 "uuid": "681b321e-db22-4456-9eb9-1701a7e3b797",
00:21:12.573 "is_configured": false,
00:21:12.573 "data_offset": 0,
00:21:12.573 "data_size": 63488
00:21:12.573 },
00:21:12.573 {
00:21:12.573 "name": "BaseBdev3",
00:21:12.573 "uuid": "7d0b495b-71d9-4787-a8e7-b2ca2d7c801b",
00:21:12.573 "is_configured": true,
00:21:12.573 "data_offset": 2048,
00:21:12.573 "data_size": 63488
00:21:12.573 }
00:21:12.573 ]
00:21:12.573 }'
00:21:12.573 12:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:12.573 12:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:12.832 [2024-12-05 12:52:55.238296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.832 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:12.832 "name": "Existed_Raid",
00:21:12.832 "uuid": "670f3b4c-dc43-4d2d-93e0-d07fa9466de3",
00:21:12.832 "strip_size_kb": 0,
00:21:12.832 "state": "configuring",
00:21:12.832 "raid_level": "raid1",
00:21:12.832 "superblock": true,
00:21:12.832 "num_base_bdevs": 3,
00:21:12.832 "num_base_bdevs_discovered": 1,
00:21:12.832 "num_base_bdevs_operational": 3,
00:21:12.832 "base_bdevs_list": [
00:21:12.832 {
00:21:12.832 "name": "BaseBdev1",
00:21:12.832 "uuid": "58284348-e071-4e97-a8b5-8c0796b17798",
00:21:12.832 "is_configured": true,
00:21:12.832 "data_offset": 2048,
00:21:12.833 "data_size": 63488
00:21:12.833 },
00:21:12.833 {
00:21:12.833 "name": null,
00:21:12.833 "uuid": "681b321e-db22-4456-9eb9-1701a7e3b797",
00:21:12.833 "is_configured": false,
00:21:12.833 "data_offset": 0,
00:21:12.833 "data_size": 63488
00:21:12.833 },
00:21:12.833 {
00:21:12.833 "name": null,
00:21:12.833 "uuid": "7d0b495b-71d9-4787-a8e7-b2ca2d7c801b",
00:21:12.833 "is_configured": false,
00:21:12.833 "data_offset": 0,
00:21:12.833 "data_size": 63488
00:21:12.833 }
00:21:12.833 ]
00:21:12.833 }'
00:21:12.833 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:12.833 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:13.093 [2024-12-05 12:52:55.574386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:13.093 "name": "Existed_Raid",
00:21:13.093 "uuid": "670f3b4c-dc43-4d2d-93e0-d07fa9466de3",
00:21:13.093 "strip_size_kb": 0,
00:21:13.093 "state": "configuring",
00:21:13.093 "raid_level": "raid1",
00:21:13.093 "superblock": true,
00:21:13.093 "num_base_bdevs": 3,
00:21:13.093 "num_base_bdevs_discovered": 2,
00:21:13.093 "num_base_bdevs_operational": 3,
00:21:13.093 "base_bdevs_list": [
00:21:13.093 {
00:21:13.093 "name": "BaseBdev1",
00:21:13.093 "uuid": "58284348-e071-4e97-a8b5-8c0796b17798",
00:21:13.093 "is_configured": true,
00:21:13.093 "data_offset": 2048,
00:21:13.093 "data_size": 63488
00:21:13.093 },
00:21:13.093 {
00:21:13.093 "name": null,
00:21:13.093 "uuid": "681b321e-db22-4456-9eb9-1701a7e3b797",
00:21:13.093 "is_configured": false,
00:21:13.093 "data_offset": 0,
00:21:13.093 "data_size": 63488
00:21:13.093 },
00:21:13.093 {
00:21:13.093 "name": "BaseBdev3",
00:21:13.093 "uuid": "7d0b495b-71d9-4787-a8e7-b2ca2d7c801b",
00:21:13.093 "is_configured": true,
00:21:13.093 "data_offset": 2048,
00:21:13.093 "data_size": 63488
00:21:13.093 }
00:21:13.093 ]
00:21:13.093 }'
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:13.093 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:13.351 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:13.351 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:21:13.351 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.351 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:13.351 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.351 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:21:13.351 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:21:13.351 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.351 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:13.351 [2024-12-05 12:52:55.934526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:21:13.658 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.658 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:21:13.658 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:21:13.658 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:21:13.658 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:21:13.658 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:21:13.658 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:21:13.658 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:13.658 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:13.658 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:13.658 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:13.658 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:13.658 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.658 12:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:13.658 12:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:13.658 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.658 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:13.658 "name": "Existed_Raid",
00:21:13.658 "uuid": "670f3b4c-dc43-4d2d-93e0-d07fa9466de3",
00:21:13.658 "strip_size_kb": 0,
00:21:13.658 "state": "configuring",
00:21:13.658 "raid_level": "raid1",
00:21:13.658 "superblock": true,
00:21:13.658 "num_base_bdevs": 3,
00:21:13.658 "num_base_bdevs_discovered": 1,
00:21:13.658 "num_base_bdevs_operational": 3,
00:21:13.658 "base_bdevs_list": [
00:21:13.658 {
00:21:13.658 "name": null,
00:21:13.658 "uuid": "58284348-e071-4e97-a8b5-8c0796b17798",
00:21:13.658 "is_configured": false,
00:21:13.658 "data_offset": 0,
00:21:13.658 "data_size": 63488
00:21:13.658 },
00:21:13.658 {
00:21:13.658 "name": null,
00:21:13.658 "uuid": "681b321e-db22-4456-9eb9-1701a7e3b797",
00:21:13.658 "is_configured": false,
00:21:13.658 "data_offset": 0,
00:21:13.658 "data_size": 63488
00:21:13.658 },
00:21:13.658 {
00:21:13.658 "name": "BaseBdev3",
00:21:13.658 "uuid": "7d0b495b-71d9-4787-a8e7-b2ca2d7c801b",
00:21:13.658 "is_configured": true,
00:21:13.658 "data_offset": 2048,
00:21:13.658 "data_size": 63488
00:21:13.658 }
00:21:13.658 ]
00:21:13.658 }'
00:21:13.658 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:13.658 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:13.916 [2024-12-05 12:52:56.354731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.916 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:13.917 "name": "Existed_Raid",
00:21:13.917 "uuid": "670f3b4c-dc43-4d2d-93e0-d07fa9466de3",
00:21:13.917 "strip_size_kb": 0,
00:21:13.917 "state": "configuring",
"raid_level": "raid1",
00:21:13.917 "superblock": true,
00:21:13.917 "num_base_bdevs": 3,
00:21:13.917 "num_base_bdevs_discovered": 2,
00:21:13.917 "num_base_bdevs_operational": 3,
00:21:13.917 "base_bdevs_list": [
00:21:13.917 {
00:21:13.917 "name": null,
00:21:13.917 "uuid": "58284348-e071-4e97-a8b5-8c0796b17798",
00:21:13.917 "is_configured": false,
00:21:13.917 "data_offset": 0,
00:21:13.917 "data_size": 63488
00:21:13.917 },
00:21:13.917 {
00:21:13.917 "name": "BaseBdev2",
00:21:13.917 "uuid": "681b321e-db22-4456-9eb9-1701a7e3b797",
00:21:13.917 "is_configured": true,
00:21:13.917 "data_offset": 2048,
00:21:13.917 "data_size": 63488
00:21:13.917 },
00:21:13.917 {
00:21:13.917 "name": "BaseBdev3",
00:21:13.917 "uuid": "7d0b495b-71d9-4787-a8e7-b2ca2d7c801b",
00:21:13.917 "is_configured": true,
00:21:13.917 "data_offset": 2048,
00:21:13.917 "data_size": 63488
00:21:13.917 }
00:21:13.917 ]
00:21:13.917 }'
00:21:13.917 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:13.917 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:14.174 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:14.174 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:21:14.174 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.174 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:14.174 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.174 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:21:14.174 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:14.174 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.174 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:14.174 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:21:14.174 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.174 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 58284348-e071-4e97-a8b5-8c0796b17798
00:21:14.174 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.174 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:14.433 [2024-12-05 12:52:56.778614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:21:14.433 [2024-12-05 12:52:56.778808] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:21:14.433 [2024-12-05 12:52:56.778819] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:21:14.433 NewBaseBdev
00:21:14.433 [2024-12-05 12:52:56.779088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:21:14.433 [2024-12-05 12:52:56.779233] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:21:14.433 [2024-12-05 12:52:56.779255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
00:21:14.433 [2024-12-05 12:52:56.779384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:14.433 [
00:21:14.433 {
00:21:14.433 "name": "NewBaseBdev",
00:21:14.433 "aliases": [
00:21:14.433 "58284348-e071-4e97-a8b5-8c0796b17798"
00:21:14.433 ],
00:21:14.433 "product_name": "Malloc disk",
00:21:14.433 "block_size": 512,
00:21:14.433 "num_blocks": 65536,
00:21:14.433 "uuid": "58284348-e071-4e97-a8b5-8c0796b17798",
00:21:14.433 "assigned_rate_limits": {
00:21:14.433 "rw_ios_per_sec": 0,
00:21:14.433 "rw_mbytes_per_sec": 0,
00:21:14.433 "r_mbytes_per_sec": 0,
00:21:14.433 "w_mbytes_per_sec": 0
00:21:14.433 },
00:21:14.433 "claimed": true,
00:21:14.433 "claim_type": "exclusive_write",
00:21:14.433 "zoned": false,
00:21:14.433 "supported_io_types": {
00:21:14.433 "read": true,
00:21:14.433 "write": true,
00:21:14.433 "unmap": true,
00:21:14.433 "flush": true,
00:21:14.433 "reset": true,
00:21:14.433 "nvme_admin": false,
00:21:14.433 "nvme_io": false,
00:21:14.433 "nvme_io_md": false,
00:21:14.433 "write_zeroes": true,
00:21:14.433 "zcopy": true,
00:21:14.433 "get_zone_info": false,
00:21:14.433 "zone_management": false,
00:21:14.433 "zone_append": false,
00:21:14.433 "compare": false,
00:21:14.433 "compare_and_write": false,
00:21:14.433 "abort": true,
00:21:14.433 "seek_hole": false,
00:21:14.433 "seek_data": false,
00:21:14.433 "copy": true,
00:21:14.433 "nvme_iov_md": false
00:21:14.433 },
00:21:14.433 "memory_domains": [
00:21:14.433 {
00:21:14.433 "dma_device_id": "system",
00:21:14.433 "dma_device_type": 1
00:21:14.433 },
00:21:14.433 {
00:21:14.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:14.433 "dma_device_type": 2
00:21:14.433 }
00:21:14.433 ],
00:21:14.433 "driver_specific": {}
00:21:14.433 }
00:21:14.433 ]
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:14.433 "name": "Existed_Raid",
00:21:14.433 "uuid": "670f3b4c-dc43-4d2d-93e0-d07fa9466de3",
00:21:14.433 "strip_size_kb": 0,
00:21:14.433 "state": "online",
00:21:14.433 "raid_level": "raid1",
00:21:14.433 "superblock": true,
00:21:14.433 "num_base_bdevs": 3,
00:21:14.433 "num_base_bdevs_discovered": 3,
00:21:14.433 "num_base_bdevs_operational": 3,
00:21:14.433 "base_bdevs_list": [
00:21:14.433 {
00:21:14.433 "name": "NewBaseBdev",
00:21:14.433 "uuid": "58284348-e071-4e97-a8b5-8c0796b17798",
00:21:14.433 "is_configured": true,
00:21:14.433 "data_offset": 2048,
00:21:14.433 "data_size": 63488
00:21:14.433 },
00:21:14.433 {
00:21:14.433 "name": "BaseBdev2",
00:21:14.433 "uuid": "681b321e-db22-4456-9eb9-1701a7e3b797",
00:21:14.433 "is_configured": true,
00:21:14.433 "data_offset": 2048,
00:21:14.433 "data_size": 63488
00:21:14.433 },
00:21:14.433 {
00:21:14.433 "name": "BaseBdev3",
00:21:14.433 "uuid": "7d0b495b-71d9-4787-a8e7-b2ca2d7c801b",
00:21:14.433 "is_configured": true,
00:21:14.433 "data_offset": 2048,
00:21:14.433 "data_size": 63488
00:21:14.433 }
00:21:14.433 ]
00:21:14.433 }'
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:14.433 12:52:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:14.693 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:21:14.693 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:21:14.693 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:21:14.693 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:21:14.693 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:21:14.693 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:21:14.693 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:21:14.693 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.693 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:21:14.693 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:14.693 [2024-12-05 12:52:57.103058] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:21:14.693 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.693 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:21:14.693 "name": "Existed_Raid",
00:21:14.693 "aliases": [
00:21:14.693 "670f3b4c-dc43-4d2d-93e0-d07fa9466de3"
00:21:14.693 ],
00:21:14.693 "product_name": "Raid Volume",
00:21:14.693 "block_size": 512,
00:21:14.693 "num_blocks": 63488,
00:21:14.693 "uuid": "670f3b4c-dc43-4d2d-93e0-d07fa9466de3",
00:21:14.693 "assigned_rate_limits": {
00:21:14.693 "rw_ios_per_sec": 0,
00:21:14.693 "rw_mbytes_per_sec": 0,
00:21:14.693 "r_mbytes_per_sec": 0,
00:21:14.694 "w_mbytes_per_sec": 0
00:21:14.694 },
00:21:14.694 "claimed": false,
00:21:14.694 "zoned": false,
00:21:14.694 "supported_io_types": {
00:21:14.694 "read": true,
00:21:14.694 "write": true,
00:21:14.694 "unmap": false,
00:21:14.694 "flush": false,
00:21:14.694 "reset": true,
00:21:14.694 "nvme_admin": false,
00:21:14.694 "nvme_io": false,
00:21:14.694 "nvme_io_md": false,
00:21:14.694 "write_zeroes": true,
00:21:14.694 "zcopy": false,
00:21:14.694 "get_zone_info": false,
00:21:14.694 "zone_management": false,
00:21:14.694 "zone_append": false,
00:21:14.694 "compare": false,
00:21:14.694 "compare_and_write": false,
00:21:14.694 "abort": false,
00:21:14.694 "seek_hole": false,
00:21:14.694 "seek_data": false,
00:21:14.694 "copy": false,
00:21:14.694 "nvme_iov_md": false
00:21:14.694 },
00:21:14.694 "memory_domains": [
00:21:14.694 {
00:21:14.694 "dma_device_id": "system",
00:21:14.694 "dma_device_type": 1
00:21:14.694 },
00:21:14.694 {
00:21:14.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:14.694 "dma_device_type": 2
00:21:14.694 },
00:21:14.694 {
00:21:14.694 "dma_device_id": "system",
00:21:14.694 "dma_device_type": 1
00:21:14.694 },
00:21:14.694 {
00:21:14.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:14.694 "dma_device_type": 2
00:21:14.694 },
00:21:14.694 {
00:21:14.694 "dma_device_id": "system",
00:21:14.694 "dma_device_type": 1
00:21:14.694 },
00:21:14.694 {
00:21:14.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:14.694 "dma_device_type": 2
00:21:14.694 }
00:21:14.694 ],
00:21:14.694 "driver_specific": {
00:21:14.694 "raid": {
"uuid": "670f3b4c-dc43-4d2d-93e0-d07fa9466de3", 00:21:14.694 "strip_size_kb": 0, 00:21:14.694 "state": "online", 00:21:14.694 "raid_level": "raid1", 00:21:14.694 "superblock": true, 00:21:14.694 "num_base_bdevs": 3, 00:21:14.694 "num_base_bdevs_discovered": 3, 00:21:14.694 "num_base_bdevs_operational": 3, 00:21:14.694 "base_bdevs_list": [ 00:21:14.694 { 00:21:14.694 "name": "NewBaseBdev", 00:21:14.694 "uuid": "58284348-e071-4e97-a8b5-8c0796b17798", 00:21:14.694 "is_configured": true, 00:21:14.694 "data_offset": 2048, 00:21:14.694 "data_size": 63488 00:21:14.694 }, 00:21:14.694 { 00:21:14.694 "name": "BaseBdev2", 00:21:14.694 "uuid": "681b321e-db22-4456-9eb9-1701a7e3b797", 00:21:14.694 "is_configured": true, 00:21:14.694 "data_offset": 2048, 00:21:14.694 "data_size": 63488 00:21:14.694 }, 00:21:14.694 { 00:21:14.694 "name": "BaseBdev3", 00:21:14.694 "uuid": "7d0b495b-71d9-4787-a8e7-b2ca2d7c801b", 00:21:14.694 "is_configured": true, 00:21:14.694 "data_offset": 2048, 00:21:14.694 "data_size": 63488 00:21:14.694 } 00:21:14.694 ] 00:21:14.694 } 00:21:14.694 } 00:21:14.694 }' 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:14.694 BaseBdev2 00:21:14.694 BaseBdev3' 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:14.694 12:52:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.694 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.955 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.955 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:14.955 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:14.955 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:14.955 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.955 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.955 [2024-12-05 12:52:57.302779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:14.955 [2024-12-05 12:52:57.302812] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:14.955 [2024-12-05 12:52:57.302875] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:14.955 [2024-12-05 12:52:57.303149] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:14.955 [2024-12-05 12:52:57.303165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:14.955 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.955 12:52:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66225 00:21:14.955 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66225 ']' 
00:21:14.955 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66225 00:21:14.956 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:14.956 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.956 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66225 00:21:14.956 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:14.956 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:14.956 killing process with pid 66225 00:21:14.956 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66225' 00:21:14.956 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66225 00:21:14.956 [2024-12-05 12:52:57.330962] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:14.956 12:52:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66225 00:21:14.956 [2024-12-05 12:52:57.519666] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:15.896 12:52:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:21:15.896 00:21:15.896 real 0m7.451s 00:21:15.896 user 0m11.895s 00:21:15.896 sys 0m1.196s 00:21:15.896 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:15.896 12:52:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.896 ************************************ 00:21:15.896 END TEST raid_state_function_test_sb 00:21:15.896 ************************************ 00:21:15.896 12:52:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 
00:21:15.896 12:52:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:15.896 12:52:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:15.896 12:52:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:15.896 ************************************ 00:21:15.896 START TEST raid_superblock_test 00:21:15.896 ************************************ 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 
00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66818 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66818 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66818 ']' 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.896 12:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.896 [2024-12-05 12:52:58.281268] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:21:15.896 [2024-12-05 12:52:58.281394] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66818 ] 00:21:15.896 [2024-12-05 12:52:58.443391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.156 [2024-12-05 12:52:58.543990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.156 [2024-12-05 12:52:58.679249] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:16.156 [2024-12-05 12:52:58.679305] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:21:16.728 
12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.728 malloc1 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.728 [2024-12-05 12:52:59.152365] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:16.728 [2024-12-05 12:52:59.152419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.728 [2024-12-05 12:52:59.152439] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:16.728 [2024-12-05 12:52:59.152448] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.728 [2024-12-05 12:52:59.154601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.728 [2024-12-05 12:52:59.154633] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:16.728 pt1 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.728 malloc2 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.728 [2024-12-05 12:52:59.188402] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:16.728 [2024-12-05 12:52:59.188453] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.728 [2024-12-05 12:52:59.188475] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:16.728 [2024-12-05 12:52:59.188484] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.728 [2024-12-05 12:52:59.190595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.728 [2024-12-05 12:52:59.190625] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:16.728 
pt2 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.728 malloc3 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.728 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.729 [2024-12-05 12:52:59.243853] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:16.729 [2024-12-05 12:52:59.243905] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.729 [2024-12-05 12:52:59.243926] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:16.729 [2024-12-05 12:52:59.243935] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.729 [2024-12-05 12:52:59.246021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.729 [2024-12-05 12:52:59.246051] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:16.729 pt3 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.729 [2024-12-05 12:52:59.251893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:16.729 [2024-12-05 12:52:59.253724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:16.729 [2024-12-05 12:52:59.253791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:16.729 [2024-12-05 12:52:59.253946] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:16.729 [2024-12-05 12:52:59.253969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:16.729 [2024-12-05 12:52:59.254211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:16.729 
[2024-12-05 12:52:59.254366] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:16.729 [2024-12-05 12:52:59.254384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:16.729 [2024-12-05 12:52:59.254538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:16.729 "name": "raid_bdev1", 00:21:16.729 "uuid": "595e0e40-51bc-4c21-b521-f83729645579", 00:21:16.729 "strip_size_kb": 0, 00:21:16.729 "state": "online", 00:21:16.729 "raid_level": "raid1", 00:21:16.729 "superblock": true, 00:21:16.729 "num_base_bdevs": 3, 00:21:16.729 "num_base_bdevs_discovered": 3, 00:21:16.729 "num_base_bdevs_operational": 3, 00:21:16.729 "base_bdevs_list": [ 00:21:16.729 { 00:21:16.729 "name": "pt1", 00:21:16.729 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:16.729 "is_configured": true, 00:21:16.729 "data_offset": 2048, 00:21:16.729 "data_size": 63488 00:21:16.729 }, 00:21:16.729 { 00:21:16.729 "name": "pt2", 00:21:16.729 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:16.729 "is_configured": true, 00:21:16.729 "data_offset": 2048, 00:21:16.729 "data_size": 63488 00:21:16.729 }, 00:21:16.729 { 00:21:16.729 "name": "pt3", 00:21:16.729 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:16.729 "is_configured": true, 00:21:16.729 "data_offset": 2048, 00:21:16.729 "data_size": 63488 00:21:16.729 } 00:21:16.729 ] 00:21:16.729 }' 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:16.729 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.990 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:16.990 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:16.990 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:16.990 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:16.990 12:52:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:16.990 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:16.990 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:16.990 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:16.990 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.990 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.990 [2024-12-05 12:52:59.568270] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:17.250 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.250 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:17.250 "name": "raid_bdev1", 00:21:17.250 "aliases": [ 00:21:17.250 "595e0e40-51bc-4c21-b521-f83729645579" 00:21:17.250 ], 00:21:17.250 "product_name": "Raid Volume", 00:21:17.250 "block_size": 512, 00:21:17.250 "num_blocks": 63488, 00:21:17.250 "uuid": "595e0e40-51bc-4c21-b521-f83729645579", 00:21:17.250 "assigned_rate_limits": { 00:21:17.250 "rw_ios_per_sec": 0, 00:21:17.250 "rw_mbytes_per_sec": 0, 00:21:17.250 "r_mbytes_per_sec": 0, 00:21:17.250 "w_mbytes_per_sec": 0 00:21:17.250 }, 00:21:17.250 "claimed": false, 00:21:17.250 "zoned": false, 00:21:17.250 "supported_io_types": { 00:21:17.250 "read": true, 00:21:17.250 "write": true, 00:21:17.250 "unmap": false, 00:21:17.250 "flush": false, 00:21:17.250 "reset": true, 00:21:17.250 "nvme_admin": false, 00:21:17.250 "nvme_io": false, 00:21:17.250 "nvme_io_md": false, 00:21:17.250 "write_zeroes": true, 00:21:17.250 "zcopy": false, 00:21:17.250 "get_zone_info": false, 00:21:17.250 "zone_management": false, 00:21:17.250 "zone_append": false, 00:21:17.250 "compare": false, 00:21:17.250 
"compare_and_write": false, 00:21:17.250 "abort": false, 00:21:17.250 "seek_hole": false, 00:21:17.250 "seek_data": false, 00:21:17.250 "copy": false, 00:21:17.250 "nvme_iov_md": false 00:21:17.250 }, 00:21:17.250 "memory_domains": [ 00:21:17.250 { 00:21:17.250 "dma_device_id": "system", 00:21:17.250 "dma_device_type": 1 00:21:17.250 }, 00:21:17.250 { 00:21:17.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.250 "dma_device_type": 2 00:21:17.250 }, 00:21:17.250 { 00:21:17.250 "dma_device_id": "system", 00:21:17.250 "dma_device_type": 1 00:21:17.250 }, 00:21:17.251 { 00:21:17.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.251 "dma_device_type": 2 00:21:17.251 }, 00:21:17.251 { 00:21:17.251 "dma_device_id": "system", 00:21:17.251 "dma_device_type": 1 00:21:17.251 }, 00:21:17.251 { 00:21:17.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.251 "dma_device_type": 2 00:21:17.251 } 00:21:17.251 ], 00:21:17.251 "driver_specific": { 00:21:17.251 "raid": { 00:21:17.251 "uuid": "595e0e40-51bc-4c21-b521-f83729645579", 00:21:17.251 "strip_size_kb": 0, 00:21:17.251 "state": "online", 00:21:17.251 "raid_level": "raid1", 00:21:17.251 "superblock": true, 00:21:17.251 "num_base_bdevs": 3, 00:21:17.251 "num_base_bdevs_discovered": 3, 00:21:17.251 "num_base_bdevs_operational": 3, 00:21:17.251 "base_bdevs_list": [ 00:21:17.251 { 00:21:17.251 "name": "pt1", 00:21:17.251 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:17.251 "is_configured": true, 00:21:17.251 "data_offset": 2048, 00:21:17.251 "data_size": 63488 00:21:17.251 }, 00:21:17.251 { 00:21:17.251 "name": "pt2", 00:21:17.251 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:17.251 "is_configured": true, 00:21:17.251 "data_offset": 2048, 00:21:17.251 "data_size": 63488 00:21:17.251 }, 00:21:17.251 { 00:21:17.251 "name": "pt3", 00:21:17.251 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:17.251 "is_configured": true, 00:21:17.251 "data_offset": 2048, 00:21:17.251 "data_size": 63488 00:21:17.251 } 
00:21:17.251 ] 00:21:17.251 } 00:21:17.251 } 00:21:17.251 }' 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:17.251 pt2 00:21:17.251 pt3' 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.251 [2024-12-05 12:52:59.748267] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=595e0e40-51bc-4c21-b521-f83729645579 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 595e0e40-51bc-4c21-b521-f83729645579 ']' 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.251 [2024-12-05 12:52:59.775983] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:17.251 [2024-12-05 12:52:59.776010] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:17.251 [2024-12-05 12:52:59.776076] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:17.251 [2024-12-05 12:52:59.776151] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:17.251 [2024-12-05 12:52:59.776162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.251 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.511 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.511 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:17.511 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.511 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:17.511 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.511 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.511 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:17.511 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:21:17.511 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:21:17.511 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:21:17.511 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:17.511 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.511 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:17.511 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.511 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:21:17.511 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.512 [2024-12-05 12:52:59.880052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:17.512 [2024-12-05 12:52:59.881900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:17.512 [2024-12-05 12:52:59.881958] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:17.512 [2024-12-05 12:52:59.882004] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:17.512 [2024-12-05 12:52:59.882047] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:17.512 [2024-12-05 12:52:59.882066] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:17.512 [2024-12-05 12:52:59.882082] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:17.512 [2024-12-05 12:52:59.882090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:17.512 request: 00:21:17.512 { 00:21:17.512 "name": "raid_bdev1", 00:21:17.512 "raid_level": "raid1", 00:21:17.512 "base_bdevs": [ 00:21:17.512 "malloc1", 00:21:17.512 "malloc2", 00:21:17.512 "malloc3" 00:21:17.512 ], 00:21:17.512 "superblock": false, 00:21:17.512 "method": "bdev_raid_create", 00:21:17.512 "req_id": 1 00:21:17.512 } 00:21:17.512 Got JSON-RPC error response 00:21:17.512 response: 00:21:17.512 { 00:21:17.512 "code": -17, 00:21:17.512 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:17.512 } 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.512 [2024-12-05 12:52:59.920020] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:17.512 [2024-12-05 12:52:59.920066] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.512 [2024-12-05 12:52:59.920083] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:17.512 [2024-12-05 12:52:59.920091] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.512 [2024-12-05 12:52:59.922228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.512 [2024-12-05 12:52:59.922259] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:17.512 [2024-12-05 12:52:59.922334] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:17.512 [2024-12-05 12:52:59.922378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:17.512 pt1 00:21:17.512 
12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.512 "name": "raid_bdev1", 00:21:17.512 "uuid": "595e0e40-51bc-4c21-b521-f83729645579", 00:21:17.512 "strip_size_kb": 0, 00:21:17.512 
"state": "configuring", 00:21:17.512 "raid_level": "raid1", 00:21:17.512 "superblock": true, 00:21:17.512 "num_base_bdevs": 3, 00:21:17.512 "num_base_bdevs_discovered": 1, 00:21:17.512 "num_base_bdevs_operational": 3, 00:21:17.512 "base_bdevs_list": [ 00:21:17.512 { 00:21:17.512 "name": "pt1", 00:21:17.512 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:17.512 "is_configured": true, 00:21:17.512 "data_offset": 2048, 00:21:17.512 "data_size": 63488 00:21:17.512 }, 00:21:17.512 { 00:21:17.512 "name": null, 00:21:17.512 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:17.512 "is_configured": false, 00:21:17.512 "data_offset": 2048, 00:21:17.512 "data_size": 63488 00:21:17.512 }, 00:21:17.512 { 00:21:17.512 "name": null, 00:21:17.512 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:17.512 "is_configured": false, 00:21:17.512 "data_offset": 2048, 00:21:17.512 "data_size": 63488 00:21:17.512 } 00:21:17.512 ] 00:21:17.512 }' 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.512 12:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.772 [2024-12-05 12:53:00.236115] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:17.772 [2024-12-05 12:53:00.236167] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.772 [2024-12-05 12:53:00.236186] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:21:17.772 
[2024-12-05 12:53:00.236194] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.772 [2024-12-05 12:53:00.236599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.772 [2024-12-05 12:53:00.236613] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:17.772 [2024-12-05 12:53:00.236683] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:17.772 [2024-12-05 12:53:00.236702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:17.772 pt2 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.772 [2024-12-05 12:53:00.244128] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.772 "name": "raid_bdev1", 00:21:17.772 "uuid": "595e0e40-51bc-4c21-b521-f83729645579", 00:21:17.772 "strip_size_kb": 0, 00:21:17.772 "state": "configuring", 00:21:17.772 "raid_level": "raid1", 00:21:17.772 "superblock": true, 00:21:17.772 "num_base_bdevs": 3, 00:21:17.772 "num_base_bdevs_discovered": 1, 00:21:17.772 "num_base_bdevs_operational": 3, 00:21:17.772 "base_bdevs_list": [ 00:21:17.772 { 00:21:17.772 "name": "pt1", 00:21:17.772 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:17.772 "is_configured": true, 00:21:17.772 "data_offset": 2048, 00:21:17.772 "data_size": 63488 00:21:17.772 }, 00:21:17.772 { 00:21:17.772 "name": null, 00:21:17.772 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:17.772 "is_configured": false, 00:21:17.772 "data_offset": 0, 00:21:17.772 "data_size": 63488 00:21:17.772 }, 00:21:17.772 { 00:21:17.772 "name": null, 00:21:17.772 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:17.772 "is_configured": false, 00:21:17.772 
"data_offset": 2048, 00:21:17.772 "data_size": 63488 00:21:17.772 } 00:21:17.772 ] 00:21:17.772 }' 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.772 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.033 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:18.033 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:18.033 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:18.033 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.033 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.033 [2024-12-05 12:53:00.580183] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:18.033 [2024-12-05 12:53:00.580238] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.034 [2024-12-05 12:53:00.580255] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:18.034 [2024-12-05 12:53:00.580265] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.034 [2024-12-05 12:53:00.580684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.034 [2024-12-05 12:53:00.580705] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:18.034 [2024-12-05 12:53:00.580770] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:18.034 [2024-12-05 12:53:00.580796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:18.034 pt2 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.034 12:53:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.034 [2024-12-05 12:53:00.588182] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:18.034 [2024-12-05 12:53:00.588222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.034 [2024-12-05 12:53:00.588234] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:18.034 [2024-12-05 12:53:00.588243] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.034 [2024-12-05 12:53:00.588620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.034 [2024-12-05 12:53:00.588644] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:18.034 [2024-12-05 12:53:00.588701] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:18.034 [2024-12-05 12:53:00.588719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:18.034 [2024-12-05 12:53:00.588835] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:18.034 [2024-12-05 12:53:00.588852] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:18.034 [2024-12-05 12:53:00.589076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:18.034 [2024-12-05 12:53:00.589215] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:21:18.034 [2024-12-05 12:53:00.589230] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:18.034 [2024-12-05 12:53:00.589354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.034 pt3 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.034 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.293 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.293 "name": "raid_bdev1", 00:21:18.293 "uuid": "595e0e40-51bc-4c21-b521-f83729645579", 00:21:18.293 "strip_size_kb": 0, 00:21:18.293 "state": "online", 00:21:18.293 "raid_level": "raid1", 00:21:18.293 "superblock": true, 00:21:18.293 "num_base_bdevs": 3, 00:21:18.293 "num_base_bdevs_discovered": 3, 00:21:18.293 "num_base_bdevs_operational": 3, 00:21:18.293 "base_bdevs_list": [ 00:21:18.293 { 00:21:18.293 "name": "pt1", 00:21:18.293 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:18.293 "is_configured": true, 00:21:18.293 "data_offset": 2048, 00:21:18.293 "data_size": 63488 00:21:18.293 }, 00:21:18.293 { 00:21:18.293 "name": "pt2", 00:21:18.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:18.293 "is_configured": true, 00:21:18.293 "data_offset": 2048, 00:21:18.293 "data_size": 63488 00:21:18.293 }, 00:21:18.293 { 00:21:18.293 "name": "pt3", 00:21:18.293 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:18.293 "is_configured": true, 00:21:18.293 "data_offset": 2048, 00:21:18.293 "data_size": 63488 00:21:18.293 } 00:21:18.293 ] 00:21:18.293 }' 00:21:18.293 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.293 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.553 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:18.553 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:18.553 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:21:18.553 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:21:18.553 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:21:18.553 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:21:18.553 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:21:18.553 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.553 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:18.553 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:21:18.553 [2024-12-05 12:53:00.896598] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:21:18.553 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.553 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:21:18.553 "name": "raid_bdev1",
00:21:18.553 "aliases": [
00:21:18.553 "595e0e40-51bc-4c21-b521-f83729645579"
00:21:18.553 ],
00:21:18.553 "product_name": "Raid Volume",
00:21:18.553 "block_size": 512,
00:21:18.553 "num_blocks": 63488,
00:21:18.553 "uuid": "595e0e40-51bc-4c21-b521-f83729645579",
00:21:18.553 "assigned_rate_limits": {
00:21:18.553 "rw_ios_per_sec": 0,
00:21:18.553 "rw_mbytes_per_sec": 0,
00:21:18.553 "r_mbytes_per_sec": 0,
00:21:18.553 "w_mbytes_per_sec": 0
00:21:18.553 },
00:21:18.553 "claimed": false,
00:21:18.553 "zoned": false,
00:21:18.553 "supported_io_types": {
00:21:18.553 "read": true,
00:21:18.553 "write": true,
00:21:18.553 "unmap": false,
00:21:18.553 "flush": false,
00:21:18.553 "reset": true,
00:21:18.553 "nvme_admin": false,
00:21:18.553 "nvme_io": false,
00:21:18.553 "nvme_io_md": false,
00:21:18.553 "write_zeroes": true,
00:21:18.553 "zcopy": false,
00:21:18.553 "get_zone_info": false,
00:21:18.553 "zone_management": false,
00:21:18.553 "zone_append": false,
00:21:18.553 "compare": false,
00:21:18.553 "compare_and_write": false,
00:21:18.553 "abort": false,
00:21:18.553 "seek_hole": false,
00:21:18.553 "seek_data": false,
00:21:18.553 "copy": false,
00:21:18.553 "nvme_iov_md": false
00:21:18.553 },
00:21:18.553 "memory_domains": [
00:21:18.553 {
00:21:18.553 "dma_device_id": "system",
00:21:18.553 "dma_device_type": 1
00:21:18.553 },
00:21:18.553 {
00:21:18.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:18.553 "dma_device_type": 2
00:21:18.553 },
00:21:18.553 {
00:21:18.553 "dma_device_id": "system",
00:21:18.553 "dma_device_type": 1
00:21:18.553 },
00:21:18.553 {
00:21:18.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:18.553 "dma_device_type": 2
00:21:18.553 },
00:21:18.553 {
00:21:18.553 "dma_device_id": "system",
00:21:18.553 "dma_device_type": 1
00:21:18.553 },
00:21:18.553 {
00:21:18.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:18.553 "dma_device_type": 2
00:21:18.553 }
00:21:18.553 ],
00:21:18.553 "driver_specific": {
00:21:18.553 "raid": {
00:21:18.553 "uuid": "595e0e40-51bc-4c21-b521-f83729645579",
00:21:18.553 "strip_size_kb": 0,
00:21:18.553 "state": "online",
00:21:18.553 "raid_level": "raid1",
00:21:18.553 "superblock": true,
00:21:18.553 "num_base_bdevs": 3,
00:21:18.553 "num_base_bdevs_discovered": 3,
00:21:18.553 "num_base_bdevs_operational": 3,
00:21:18.553 "base_bdevs_list": [
00:21:18.553 {
00:21:18.553 "name": "pt1",
00:21:18.553 "uuid": "00000000-0000-0000-0000-000000000001",
00:21:18.553 "is_configured": true,
00:21:18.553 "data_offset": 2048,
00:21:18.553 "data_size": 63488
00:21:18.553 },
00:21:18.553 {
00:21:18.553 "name": "pt2",
00:21:18.553 "uuid": "00000000-0000-0000-0000-000000000002",
00:21:18.553 "is_configured": true,
00:21:18.553 "data_offset": 2048,
00:21:18.553 "data_size": 63488
00:21:18.553 },
00:21:18.553 {
00:21:18.553 "name": "pt3",
00:21:18.553 "uuid": "00000000-0000-0000-0000-000000000003",
00:21:18.553 "is_configured": true,
00:21:18.553 "data_offset": 2048,
00:21:18.553 "data_size": 63488
00:21:18.553 }
00:21:18.553 ]
00:21:18.553 }
00:21:18.553 }
00:21:18.553 }'
00:21:18.553 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:21:18.553 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:21:18.553 pt2
00:21:18.553 pt3'
00:21:18.553 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:18.553 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:21:18.553 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:18.553 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:21:18.553 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.553 12:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:18.553 12:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:18.553 [2024-12-05 12:53:01.092614] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 595e0e40-51bc-4c21-b521-f83729645579 '!=' 595e0e40-51bc-4c21-b521-f83729645579 ']'
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:18.553 [2024-12-05 12:53:01.116352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.553 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:18.814 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.814 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:18.814 "name": "raid_bdev1",
00:21:18.814 "uuid": "595e0e40-51bc-4c21-b521-f83729645579",
00:21:18.814 "strip_size_kb": 0,
00:21:18.814 "state": "online",
00:21:18.814 "raid_level": "raid1",
00:21:18.814 "superblock": true,
00:21:18.814 "num_base_bdevs": 3,
00:21:18.814 "num_base_bdevs_discovered": 2,
00:21:18.814 "num_base_bdevs_operational": 2,
00:21:18.814 "base_bdevs_list": [
00:21:18.814 {
00:21:18.814 "name": null,
00:21:18.814 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:18.814 "is_configured": false,
00:21:18.814 "data_offset": 0,
00:21:18.814 "data_size": 63488
00:21:18.814 },
00:21:18.814 {
00:21:18.814 "name": "pt2",
00:21:18.814 "uuid": "00000000-0000-0000-0000-000000000002",
00:21:18.814 "is_configured": true,
00:21:18.814 "data_offset": 2048,
00:21:18.814 "data_size": 63488
00:21:18.814 },
00:21:18.814 {
00:21:18.814 "name": "pt3",
00:21:18.814 "uuid": "00000000-0000-0000-0000-000000000003",
00:21:18.814 "is_configured": true,
00:21:18.814 "data_offset": 2048,
00:21:18.814 "data_size": 63488
00:21:18.814 }
00:21:18.814 ]
00:21:18.814 }'
00:21:18.814 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:18.814 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:19.074 [2024-12-05 12:53:01.416395] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:21:19.074 [2024-12-05 12:53:01.416424] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:21:19.074 [2024-12-05 12:53:01.416486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:21:19.074 [2024-12-05 12:53:01.416554] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:21:19.074 [2024-12-05 12:53:01.416568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:19.074 [2024-12-05 12:53:01.476420] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:21:19.074 [2024-12-05 12:53:01.476473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:19.074 [2024-12-05 12:53:01.476498] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:21:19.074 [2024-12-05 12:53:01.476509] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:19.074 [2024-12-05 12:53:01.478781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:19.074 [2024-12-05 12:53:01.478814] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:21:19.074 [2024-12-05 12:53:01.478883] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:21:19.074 [2024-12-05 12:53:01.478927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:21:19.074 pt2
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:21:19.074 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:21:19.075 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:21:19.075 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:21:19.075 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:21:19.075 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:19.075 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:19.075 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:19.075 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:19.075 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:19.075 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.075 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:19.075 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:19.075 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.075 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:19.075 "name": "raid_bdev1",
00:21:19.075 "uuid": "595e0e40-51bc-4c21-b521-f83729645579",
00:21:19.075 "strip_size_kb": 0,
00:21:19.075 "state": "configuring",
00:21:19.075 "raid_level": "raid1",
00:21:19.075 "superblock": true,
00:21:19.075 "num_base_bdevs": 3,
00:21:19.075 "num_base_bdevs_discovered": 1,
00:21:19.075 "num_base_bdevs_operational": 2,
00:21:19.075 "base_bdevs_list": [
00:21:19.075 {
00:21:19.075 "name": null,
00:21:19.075 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:19.075 "is_configured": false,
00:21:19.075 "data_offset": 2048,
00:21:19.075 "data_size": 63488
00:21:19.075 },
00:21:19.075 {
00:21:19.075 "name": "pt2",
00:21:19.075 "uuid": "00000000-0000-0000-0000-000000000002",
00:21:19.075 "is_configured": true,
00:21:19.075 "data_offset": 2048,
00:21:19.075 "data_size": 63488
00:21:19.075 },
00:21:19.075 {
00:21:19.075 "name": null,
00:21:19.075 "uuid": "00000000-0000-0000-0000-000000000003",
00:21:19.075 "is_configured": false,
00:21:19.075 "data_offset": 2048,
00:21:19.075 "data_size": 63488
00:21:19.075 }
00:21:19.075 ]
00:21:19.075 }'
00:21:19.075 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:19.075 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:19.335 [2024-12-05 12:53:01.784512] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:21:19.335 [2024-12-05 12:53:01.784569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:19.335 [2024-12-05 12:53:01.784586] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:21:19.335 [2024-12-05 12:53:01.784597] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:19.335 [2024-12-05 12:53:01.785000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:19.335 [2024-12-05 12:53:01.785022] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:21:19.335 [2024-12-05 12:53:01.785095] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:21:19.335 [2024-12-05 12:53:01.785125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:21:19.335 [2024-12-05 12:53:01.785228] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:21:19.335 [2024-12-05 12:53:01.785239] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:21:19.335 [2024-12-05 12:53:01.785485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:21:19.335 [2024-12-05 12:53:01.785638] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:21:19.335 [2024-12-05 12:53:01.785654] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:21:19.335 [2024-12-05 12:53:01.785779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:19.335 pt3
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:19.335 "name": "raid_bdev1",
00:21:19.335 "uuid": "595e0e40-51bc-4c21-b521-f83729645579",
00:21:19.335 "strip_size_kb": 0,
00:21:19.335 "state": "online",
00:21:19.335 "raid_level": "raid1",
00:21:19.335 "superblock": true,
00:21:19.335 "num_base_bdevs": 3,
00:21:19.335 "num_base_bdevs_discovered": 2,
00:21:19.335 "num_base_bdevs_operational": 2,
00:21:19.335 "base_bdevs_list": [
00:21:19.335 {
00:21:19.335 "name": null,
00:21:19.335 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:19.335 "is_configured": false,
00:21:19.335 "data_offset": 2048,
00:21:19.335 "data_size": 63488
00:21:19.335 },
00:21:19.335 {
00:21:19.335 "name": "pt2",
00:21:19.335 "uuid": "00000000-0000-0000-0000-000000000002",
00:21:19.335 "is_configured": true,
00:21:19.335 "data_offset": 2048,
00:21:19.335 "data_size": 63488
00:21:19.335 },
00:21:19.335 {
00:21:19.335 "name": "pt3",
00:21:19.335 "uuid": "00000000-0000-0000-0000-000000000003",
00:21:19.335 "is_configured": true,
00:21:19.335 "data_offset": 2048,
00:21:19.335 "data_size": 63488
00:21:19.335 }
00:21:19.335 ]
00:21:19.335 }'
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:19.335 12:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:19.595 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:21:19.595 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.595 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:19.595 [2024-12-05 12:53:02.084556] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:21:19.595 [2024-12-05 12:53:02.084590] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:21:19.595 [2024-12-05 12:53:02.084652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:21:19.596 [2024-12-05 12:53:02.084712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:21:19.596 [2024-12-05 12:53:02.084721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']'
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:19.596 [2024-12-05 12:53:02.132586] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:21:19.596 [2024-12-05 12:53:02.132635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:19.596 [2024-12-05 12:53:02.132652] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:21:19.596 [2024-12-05 12:53:02.132661] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:19.596 [2024-12-05 12:53:02.134856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:19.596 [2024-12-05 12:53:02.134884] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:21:19.596 [2024-12-05 12:53:02.134955] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:21:19.596 [2024-12-05 12:53:02.134994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:21:19.596 [2024-12-05 12:53:02.135108] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:21:19.596 [2024-12-05 12:53:02.135124] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:21:19.596 [2024-12-05 12:53:02.135139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:21:19.596 [2024-12-05 12:53:02.135187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:21:19.596 pt1
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']'
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:19.596 "name": "raid_bdev1",
00:21:19.596 "uuid": "595e0e40-51bc-4c21-b521-f83729645579",
00:21:19.596 "strip_size_kb": 0,
00:21:19.596 "state": "configuring",
00:21:19.596 "raid_level": "raid1",
00:21:19.596 "superblock": true,
00:21:19.596 "num_base_bdevs": 3,
00:21:19.596 "num_base_bdevs_discovered": 1,
00:21:19.596 "num_base_bdevs_operational": 2,
00:21:19.596 "base_bdevs_list": [
00:21:19.596 {
00:21:19.596 "name": null,
00:21:19.596 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:19.596 "is_configured": false,
00:21:19.596 "data_offset": 2048,
00:21:19.596 "data_size": 63488
00:21:19.596 },
00:21:19.596 {
00:21:19.596 "name": "pt2",
00:21:19.596 "uuid": "00000000-0000-0000-0000-000000000002",
00:21:19.596 "is_configured": true,
00:21:19.596 "data_offset": 2048,
00:21:19.596 "data_size": 63488
00:21:19.596 },
00:21:19.596 {
00:21:19.596 "name": null,
00:21:19.596 "uuid": "00000000-0000-0000-0000-000000000003",
00:21:19.596 "is_configured": false,
00:21:19.596 "data_offset": 2048,
00:21:19.596 "data_size": 63488
00:21:19.596 }
00:21:19.596 ]
00:21:19.596 }'
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:19.596 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:19.856 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring
00:21:19.856 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.856 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:19.856 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]]
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:20.114 [2024-12-05 12:53:02.464679] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:21:20.114 [2024-12-05 12:53:02.464737] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:20.114 [2024-12-05 12:53:02.464756] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:21:20.114 [2024-12-05 12:53:02.464765] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:20.114 [2024-12-05 12:53:02.465185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:20.114 [2024-12-05 12:53:02.465207] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:21:20.114 [2024-12-05 12:53:02.465276] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:21:20.114 [2024-12-05 12:53:02.465295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:21:20.114 [2024-12-05 12:53:02.465403] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900
00:21:20.114 [2024-12-05 12:53:02.465417] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:21:20.114 [2024-12-05 12:53:02.465659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:21:20.114 [2024-12-05 12:53:02.465799] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900
00:21:20.114 [2024-12-05 12:53:02.465811] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900
00:21:20.114 [2024-12-05 12:53:02.465932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:20.114 pt3
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:20.114 "name": "raid_bdev1",
00:21:20.114 "uuid": "595e0e40-51bc-4c21-b521-f83729645579",
00:21:20.114 "strip_size_kb": 0,
00:21:20.114 "state": "online",
00:21:20.114 "raid_level": "raid1",
00:21:20.114 "superblock": true,
00:21:20.114 "num_base_bdevs": 3,
00:21:20.114 "num_base_bdevs_discovered": 2,
00:21:20.114 "num_base_bdevs_operational": 2,
00:21:20.114 "base_bdevs_list": [
00:21:20.114 {
00:21:20.114 "name": null,
00:21:20.114 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:20.114 "is_configured": false,
00:21:20.114 "data_offset": 2048,
00:21:20.114 "data_size": 63488
00:21:20.114 },
00:21:20.114 {
00:21:20.114 "name": "pt2",
00:21:20.114 "uuid": "00000000-0000-0000-0000-000000000002",
00:21:20.114 "is_configured": true,
00:21:20.114 "data_offset": 2048,
00:21:20.114 "data_size": 63488
00:21:20.114 },
00:21:20.114 {
00:21:20.114 "name": "pt3",
00:21:20.114 "uuid": "00000000-0000-0000-0000-000000000003",
00:21:20.114 "is_configured": true,
00:21:20.114 "data_offset": 2048,
00:21:20.114 "data_size": 63488
00:21:20.114 }
00:21:20.114 ]
00:21:20.114 }'
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:20.114 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:21:20.373 [2024-12-05 12:53:02.837057] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 595e0e40-51bc-4c21-b521-f83729645579 '!=' 595e0e40-51bc-4c21-b521-f83729645579 ']'
00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66818
00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66818 ']'
00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66818
00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname
00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66818
00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 66818
00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66818'
12:53:02 bdev_raid.raid_superblock_test --
common/autotest_common.sh@973 -- # kill 66818 00:21:20.373 [2024-12-05 12:53:02.890672] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:20.373 12:53:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66818 00:21:20.373 [2024-12-05 12:53:02.890761] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:20.373 [2024-12-05 12:53:02.890821] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:20.373 [2024-12-05 12:53:02.890833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:20.632 [2024-12-05 12:53:03.075655] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:21.570 12:53:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:21.570 00:21:21.570 real 0m5.577s 00:21:21.570 user 0m8.684s 00:21:21.570 sys 0m0.908s 00:21:21.570 12:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:21.570 ************************************ 00:21:21.570 END TEST raid_superblock_test 00:21:21.570 ************************************ 00:21:21.570 12:53:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.570 12:53:03 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:21:21.570 12:53:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:21.570 12:53:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:21.570 12:53:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:21.570 ************************************ 00:21:21.570 START TEST raid_read_error_test 00:21:21.570 ************************************ 00:21:21.570 12:53:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:21:21.570 12:53:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:21:21.570 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:21:21.570 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:21:21.570 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:21.570 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:21.570 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:21.570 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:21.570 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:21.570 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:21.570 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:21.570 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:21.571 12:53:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wD2dpxt7vO 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67236 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67236 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67236 ']' 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.571 12:53:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.571 [2024-12-05 12:53:03.906268] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:21:21.571 [2024-12-05 12:53:03.906415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67236 ] 00:21:21.571 [2024-12-05 12:53:04.058441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.831 [2024-12-05 12:53:04.161375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.831 [2024-12-05 12:53:04.296155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:21.831 [2024-12-05 12:53:04.296211] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.401 BaseBdev1_malloc 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.401 true 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.401 [2024-12-05 12:53:04.805572] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:22.401 [2024-12-05 12:53:04.805626] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.401 [2024-12-05 12:53:04.805645] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:22.401 [2024-12-05 12:53:04.805656] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.401 [2024-12-05 12:53:04.807782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.401 [2024-12-05 12:53:04.807815] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:22.401 BaseBdev1 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.401 BaseBdev2_malloc 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.401 true 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.401 [2024-12-05 12:53:04.849597] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:22.401 [2024-12-05 12:53:04.849647] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.401 [2024-12-05 12:53:04.849663] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:22.401 [2024-12-05 12:53:04.849673] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.401 [2024-12-05 12:53:04.851786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.401 [2024-12-05 12:53:04.851819] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:22.401 BaseBdev2 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.401 BaseBdev3_malloc 00:21:22.401 12:53:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.401 true 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.401 [2024-12-05 12:53:04.904418] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:22.401 [2024-12-05 12:53:04.904473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.401 [2024-12-05 12:53:04.904502] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:22.401 [2024-12-05 12:53:04.904514] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.401 [2024-12-05 12:53:04.906685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.401 [2024-12-05 12:53:04.906718] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:22.401 BaseBdev3 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.401 [2024-12-05 12:53:04.912499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:22.401 [2024-12-05 12:53:04.914317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:22.401 [2024-12-05 12:53:04.914396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:22.401 [2024-12-05 12:53:04.914613] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:22.401 [2024-12-05 12:53:04.914631] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:22.401 [2024-12-05 12:53:04.914900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:21:22.401 [2024-12-05 12:53:04.915063] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:22.401 [2024-12-05 12:53:04.915080] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:22.401 [2024-12-05 12:53:04.915236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:22.401 12:53:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.401 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.402 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.402 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:22.402 "name": "raid_bdev1", 00:21:22.402 "uuid": "e952573d-bda1-4e6a-a6c0-5186930be98f", 00:21:22.402 "strip_size_kb": 0, 00:21:22.402 "state": "online", 00:21:22.402 "raid_level": "raid1", 00:21:22.402 "superblock": true, 00:21:22.402 "num_base_bdevs": 3, 00:21:22.402 "num_base_bdevs_discovered": 3, 00:21:22.402 "num_base_bdevs_operational": 3, 00:21:22.402 "base_bdevs_list": [ 00:21:22.402 { 00:21:22.402 "name": "BaseBdev1", 00:21:22.402 "uuid": "eee2b822-821a-5491-963c-fe22776099af", 00:21:22.402 "is_configured": true, 00:21:22.402 "data_offset": 2048, 00:21:22.402 "data_size": 63488 00:21:22.402 }, 00:21:22.402 { 00:21:22.402 "name": "BaseBdev2", 00:21:22.402 "uuid": "edc626d8-0571-5af7-b741-ef42404917e8", 00:21:22.402 "is_configured": true, 00:21:22.402 "data_offset": 2048, 00:21:22.402 "data_size": 63488 
00:21:22.402 }, 00:21:22.402 { 00:21:22.402 "name": "BaseBdev3", 00:21:22.402 "uuid": "086b66bd-ccb2-5fa4-bf23-1c509923b361", 00:21:22.402 "is_configured": true, 00:21:22.402 "data_offset": 2048, 00:21:22.402 "data_size": 63488 00:21:22.402 } 00:21:22.402 ] 00:21:22.402 }' 00:21:22.402 12:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:22.402 12:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.659 12:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:22.659 12:53:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:22.919 [2024-12-05 12:53:05.321530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:21:23.861 12:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:23.861 12:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.861 12:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.861 12:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.861 12:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:23.861 12:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:23.861 12:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:21:23.861 12:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:21:23.861 12:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:23.861 12:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:23.861 
12:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:23.861 12:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:23.861 12:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:23.861 12:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:23.862 12:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.862 12:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.862 12:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.862 12:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.862 12:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.862 12:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.862 12:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.862 12:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.862 12:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.862 12:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.862 "name": "raid_bdev1", 00:21:23.862 "uuid": "e952573d-bda1-4e6a-a6c0-5186930be98f", 00:21:23.862 "strip_size_kb": 0, 00:21:23.862 "state": "online", 00:21:23.862 "raid_level": "raid1", 00:21:23.862 "superblock": true, 00:21:23.862 "num_base_bdevs": 3, 00:21:23.862 "num_base_bdevs_discovered": 3, 00:21:23.862 "num_base_bdevs_operational": 3, 00:21:23.862 "base_bdevs_list": [ 00:21:23.862 { 00:21:23.862 "name": "BaseBdev1", 00:21:23.862 "uuid": "eee2b822-821a-5491-963c-fe22776099af", 
00:21:23.862 "is_configured": true, 00:21:23.862 "data_offset": 2048, 00:21:23.862 "data_size": 63488 00:21:23.862 }, 00:21:23.862 { 00:21:23.862 "name": "BaseBdev2", 00:21:23.862 "uuid": "edc626d8-0571-5af7-b741-ef42404917e8", 00:21:23.862 "is_configured": true, 00:21:23.862 "data_offset": 2048, 00:21:23.862 "data_size": 63488 00:21:23.862 }, 00:21:23.862 { 00:21:23.862 "name": "BaseBdev3", 00:21:23.862 "uuid": "086b66bd-ccb2-5fa4-bf23-1c509923b361", 00:21:23.862 "is_configured": true, 00:21:23.862 "data_offset": 2048, 00:21:23.862 "data_size": 63488 00:21:23.862 } 00:21:23.862 ] 00:21:23.862 }' 00:21:23.862 12:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.862 12:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.125 12:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:24.125 12:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.125 12:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.125 [2024-12-05 12:53:06.564645] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:24.125 [2024-12-05 12:53:06.564679] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:24.125 [2024-12-05 12:53:06.567782] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:24.126 [2024-12-05 12:53:06.567832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.126 [2024-12-05 12:53:06.567941] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:24.126 [2024-12-05 12:53:06.567951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:24.126 12:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:21:24.126 { 00:21:24.126 "results": [ 00:21:24.126 { 00:21:24.126 "job": "raid_bdev1", 00:21:24.126 "core_mask": "0x1", 00:21:24.126 "workload": "randrw", 00:21:24.126 "percentage": 50, 00:21:24.126 "status": "finished", 00:21:24.126 "queue_depth": 1, 00:21:24.126 "io_size": 131072, 00:21:24.126 "runtime": 1.241366, 00:21:24.126 "iops": 13904.843535266795, 00:21:24.126 "mibps": 1738.1054419083493, 00:21:24.126 "io_failed": 0, 00:21:24.126 "io_timeout": 0, 00:21:24.126 "avg_latency_us": 68.67013213424661, 00:21:24.126 "min_latency_us": 30.326153846153847, 00:21:24.126 "max_latency_us": 1726.6215384615384 00:21:24.126 } 00:21:24.126 ], 00:21:24.126 "core_count": 1 00:21:24.126 } 00:21:24.126 12:53:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67236 00:21:24.126 12:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67236 ']' 00:21:24.126 12:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67236 00:21:24.126 12:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:21:24.126 12:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:24.126 12:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67236 00:21:24.126 12:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:24.126 12:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:24.126 killing process with pid 67236 00:21:24.126 12:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67236' 00:21:24.126 12:53:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67236 00:21:24.126 [2024-12-05 12:53:06.593277] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:24.126 12:53:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67236 00:21:24.387 [2024-12-05 12:53:06.736657] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:24.968 12:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:24.968 12:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:24.968 12:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wD2dpxt7vO 00:21:24.968 12:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:21:24.969 12:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:21:24.969 12:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:24.969 12:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:24.969 12:53:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:24.969 00:21:24.969 real 0m3.598s 00:21:24.969 user 0m4.286s 00:21:24.969 sys 0m0.399s 00:21:24.969 12:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.969 12:53:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.969 ************************************ 00:21:24.969 END TEST raid_read_error_test 00:21:24.969 ************************************ 00:21:24.969 12:53:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:21:24.969 12:53:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:24.969 12:53:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:24.969 12:53:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:24.969 ************************************ 00:21:24.969 START TEST raid_write_error_test 00:21:24.969 ************************************ 00:21:24.969 12:53:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mMeOO8I36J 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67376 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67376 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67376 ']' 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:24.969 12:53:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.969 [2024-12-05 12:53:07.548271] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:21:24.969 [2024-12-05 12:53:07.548383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67376 ] 00:21:25.230 [2024-12-05 12:53:07.702580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.230 [2024-12-05 12:53:07.786388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.491 [2024-12-05 12:53:07.896057] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:25.491 [2024-12-05 12:53:07.896089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:26.062 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:26.062 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:21:26.062 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:26.062 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:26.062 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.062 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.062 BaseBdev1_malloc 00:21:26.062 12:53:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.062 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:26.062 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.062 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.062 true 00:21:26.062 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.062 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:26.062 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.062 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.062 [2024-12-05 12:53:08.431224] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:26.062 [2024-12-05 12:53:08.431267] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:26.062 [2024-12-05 12:53:08.431283] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:26.062 [2024-12-05 12:53:08.431293] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:26.062 [2024-12-05 12:53:08.433120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:26.063 [2024-12-05 12:53:08.433150] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:26.063 BaseBdev1 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.063 BaseBdev2_malloc 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.063 true 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.063 [2024-12-05 12:53:08.474804] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:26.063 [2024-12-05 12:53:08.474845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:26.063 [2024-12-05 12:53:08.474858] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:26.063 [2024-12-05 12:53:08.474868] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:26.063 [2024-12-05 12:53:08.476665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:26.063 [2024-12-05 12:53:08.476694] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:26.063 BaseBdev2 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.063 BaseBdev3_malloc 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.063 true 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.063 [2024-12-05 12:53:08.532357] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:26.063 [2024-12-05 12:53:08.532401] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:26.063 [2024-12-05 12:53:08.532415] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:26.063 [2024-12-05 12:53:08.532424] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:26.063 [2024-12-05 12:53:08.534224] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:26.063 [2024-12-05 12:53:08.534254] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:26.063 BaseBdev3 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.063 [2024-12-05 12:53:08.544412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:26.063 [2024-12-05 12:53:08.545976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:26.063 [2024-12-05 12:53:08.546045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:26.063 [2024-12-05 12:53:08.546261] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:26.063 [2024-12-05 12:53:08.546280] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:26.063 [2024-12-05 12:53:08.546518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:21:26.063 [2024-12-05 12:53:08.546666] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:26.063 [2024-12-05 12:53:08.546683] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:26.063 [2024-12-05 12:53:08.546823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.063 12:53:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.063 "name": "raid_bdev1", 00:21:26.063 "uuid": "6403f1fb-4616-45c2-9d3b-e1811429448b", 00:21:26.063 "strip_size_kb": 0, 00:21:26.063 "state": "online", 00:21:26.063 "raid_level": "raid1", 00:21:26.063 "superblock": true, 00:21:26.063 
"num_base_bdevs": 3, 00:21:26.063 "num_base_bdevs_discovered": 3, 00:21:26.063 "num_base_bdevs_operational": 3, 00:21:26.063 "base_bdevs_list": [ 00:21:26.063 { 00:21:26.063 "name": "BaseBdev1", 00:21:26.063 "uuid": "8be66fc9-c852-5044-a88a-b90c9d37e623", 00:21:26.063 "is_configured": true, 00:21:26.063 "data_offset": 2048, 00:21:26.063 "data_size": 63488 00:21:26.063 }, 00:21:26.063 { 00:21:26.063 "name": "BaseBdev2", 00:21:26.063 "uuid": "12df096f-ef0c-5a8a-bbbf-83c6906d445e", 00:21:26.063 "is_configured": true, 00:21:26.063 "data_offset": 2048, 00:21:26.063 "data_size": 63488 00:21:26.063 }, 00:21:26.063 { 00:21:26.063 "name": "BaseBdev3", 00:21:26.063 "uuid": "a6591d9e-c734-5975-ae9a-919e5df5e927", 00:21:26.063 "is_configured": true, 00:21:26.063 "data_offset": 2048, 00:21:26.063 "data_size": 63488 00:21:26.063 } 00:21:26.063 ] 00:21:26.063 }' 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.063 12:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.325 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:26.325 12:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:26.585 [2024-12-05 12:53:08.961311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:21:27.524 12:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:27.524 12:53:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.524 12:53:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.524 [2024-12-05 12:53:09.877738] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:21:27.524 [2024-12-05 12:53:09.877783] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:27.524 [2024-12-05 12:53:09.877975] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 00:21:27.524 12:53:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.524 12:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:27.524 12:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:27.525 12:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:21:27.525 12:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:21:27.525 12:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:27.525 12:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:27.525 12:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:27.525 12:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:27.525 12:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:27.525 12:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:27.525 12:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.525 12:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.525 12:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.525 12:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.525 12:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.525 12:53:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.525 12:53:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.525 12:53:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.525 12:53:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.525 12:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.525 "name": "raid_bdev1", 00:21:27.525 "uuid": "6403f1fb-4616-45c2-9d3b-e1811429448b", 00:21:27.525 "strip_size_kb": 0, 00:21:27.525 "state": "online", 00:21:27.525 "raid_level": "raid1", 00:21:27.525 "superblock": true, 00:21:27.525 "num_base_bdevs": 3, 00:21:27.525 "num_base_bdevs_discovered": 2, 00:21:27.525 "num_base_bdevs_operational": 2, 00:21:27.525 "base_bdevs_list": [ 00:21:27.525 { 00:21:27.525 "name": null, 00:21:27.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.525 "is_configured": false, 00:21:27.525 "data_offset": 0, 00:21:27.525 "data_size": 63488 00:21:27.525 }, 00:21:27.525 { 00:21:27.525 "name": "BaseBdev2", 00:21:27.525 "uuid": "12df096f-ef0c-5a8a-bbbf-83c6906d445e", 00:21:27.525 "is_configured": true, 00:21:27.525 "data_offset": 2048, 00:21:27.525 "data_size": 63488 00:21:27.525 }, 00:21:27.525 { 00:21:27.525 "name": "BaseBdev3", 00:21:27.525 "uuid": "a6591d9e-c734-5975-ae9a-919e5df5e927", 00:21:27.525 "is_configured": true, 00:21:27.525 "data_offset": 2048, 00:21:27.525 "data_size": 63488 00:21:27.525 } 00:21:27.525 ] 00:21:27.525 }' 00:21:27.525 12:53:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.525 12:53:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.825 12:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:27.826 12:53:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.826 12:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.826 [2024-12-05 12:53:10.179317] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:27.826 [2024-12-05 12:53:10.179349] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:27.826 [2024-12-05 12:53:10.181919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:27.826 [2024-12-05 12:53:10.181966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:27.826 [2024-12-05 12:53:10.182037] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:27.826 [2024-12-05 12:53:10.182049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:27.826 { 00:21:27.826 "results": [ 00:21:27.826 { 00:21:27.826 "job": "raid_bdev1", 00:21:27.826 "core_mask": "0x1", 00:21:27.826 "workload": "randrw", 00:21:27.826 "percentage": 50, 00:21:27.826 "status": "finished", 00:21:27.826 "queue_depth": 1, 00:21:27.826 "io_size": 131072, 00:21:27.826 "runtime": 1.216415, 00:21:27.826 "iops": 16574.11327548575, 00:21:27.826 "mibps": 2071.7641594357187, 00:21:27.826 "io_failed": 0, 00:21:27.826 "io_timeout": 0, 00:21:27.826 "avg_latency_us": 57.54647396153273, 00:21:27.826 "min_latency_us": 24.615384615384617, 00:21:27.826 "max_latency_us": 1537.5753846153846 00:21:27.826 } 00:21:27.826 ], 00:21:27.826 "core_count": 1 00:21:27.826 } 00:21:27.826 12:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.826 12:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67376 00:21:27.826 12:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67376 ']' 00:21:27.826 12:53:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@958 -- # kill -0 67376 00:21:27.826 12:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:21:27.826 12:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:27.826 12:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67376 00:21:27.826 12:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:27.826 12:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:27.826 killing process with pid 67376 00:21:27.826 12:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67376' 00:21:27.826 12:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67376 00:21:27.826 12:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67376 00:21:27.826 [2024-12-05 12:53:10.208665] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:27.826 [2024-12-05 12:53:10.322450] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:28.396 12:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mMeOO8I36J 00:21:28.396 12:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:28.396 12:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:28.396 12:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:21:28.396 12:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:21:28.396 12:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:28.396 12:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:28.396 12:53:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 
-- # [[ 0.00 = \0\.\0\0 ]] 00:21:28.396 00:21:28.396 real 0m3.455s 00:21:28.396 user 0m4.119s 00:21:28.396 sys 0m0.381s 00:21:28.396 12:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:28.396 12:53:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.396 ************************************ 00:21:28.396 END TEST raid_write_error_test 00:21:28.396 ************************************ 00:21:28.396 12:53:10 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:21:28.396 12:53:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:21:28.396 12:53:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:21:28.396 12:53:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:28.396 12:53:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:28.396 12:53:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:28.396 ************************************ 00:21:28.396 START TEST raid_state_function_test 00:21:28.396 ************************************ 00:21:28.396 12:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:21:28.396 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:21:28.396 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:21:28.396 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:28.396 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:28.655 12:53:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67503 00:21:28.655 Process raid pid: 67503 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67503' 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67503 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67503 ']' 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:28.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:28.655 12:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.655 [2024-12-05 12:53:11.046862] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:21:28.655 [2024-12-05 12:53:11.046978] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.655 [2024-12-05 12:53:11.211151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.915 [2024-12-05 12:53:11.315185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.915 [2024-12-05 12:53:11.455414] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:28.915 [2024-12-05 12:53:11.455455] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.488 [2024-12-05 12:53:11.868478] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:29.488 [2024-12-05 12:53:11.868537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:29.488 [2024-12-05 12:53:11.868547] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:29.488 [2024-12-05 12:53:11.868556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:29.488 [2024-12-05 12:53:11.868562] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:21:29.488 [2024-12-05 12:53:11.868571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:29.488 [2024-12-05 12:53:11.868577] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:29.488 [2024-12-05 12:53:11.868585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.488 "name": "Existed_Raid", 00:21:29.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.488 "strip_size_kb": 64, 00:21:29.488 "state": "configuring", 00:21:29.488 "raid_level": "raid0", 00:21:29.488 "superblock": false, 00:21:29.488 "num_base_bdevs": 4, 00:21:29.488 "num_base_bdevs_discovered": 0, 00:21:29.488 "num_base_bdevs_operational": 4, 00:21:29.488 "base_bdevs_list": [ 00:21:29.488 { 00:21:29.488 "name": "BaseBdev1", 00:21:29.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.488 "is_configured": false, 00:21:29.488 "data_offset": 0, 00:21:29.488 "data_size": 0 00:21:29.488 }, 00:21:29.488 { 00:21:29.488 "name": "BaseBdev2", 00:21:29.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.488 "is_configured": false, 00:21:29.488 "data_offset": 0, 00:21:29.488 "data_size": 0 00:21:29.488 }, 00:21:29.488 { 00:21:29.488 "name": "BaseBdev3", 00:21:29.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.488 "is_configured": false, 00:21:29.488 "data_offset": 0, 00:21:29.488 "data_size": 0 00:21:29.488 }, 00:21:29.488 { 00:21:29.488 "name": "BaseBdev4", 00:21:29.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.488 "is_configured": false, 00:21:29.488 "data_offset": 0, 00:21:29.488 "data_size": 0 00:21:29.488 } 00:21:29.488 ] 00:21:29.488 }' 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.488 12:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.750 [2024-12-05 12:53:12.192506] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:29.750 [2024-12-05 12:53:12.192541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.750 [2024-12-05 12:53:12.200522] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:29.750 [2024-12-05 12:53:12.200555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:29.750 [2024-12-05 12:53:12.200564] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:29.750 [2024-12-05 12:53:12.200573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:29.750 [2024-12-05 12:53:12.200580] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:29.750 [2024-12-05 12:53:12.200590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:29.750 [2024-12-05 12:53:12.200596] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:29.750 [2024-12-05 12:53:12.200606] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.750 [2024-12-05 12:53:12.233001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:29.750 BaseBdev1 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.750 [ 00:21:29.750 { 00:21:29.750 "name": "BaseBdev1", 00:21:29.750 "aliases": [ 00:21:29.750 "9128438b-2277-425e-80bf-3ebdace52718" 00:21:29.750 ], 00:21:29.750 "product_name": "Malloc disk", 00:21:29.750 "block_size": 512, 00:21:29.750 "num_blocks": 65536, 00:21:29.750 "uuid": "9128438b-2277-425e-80bf-3ebdace52718", 00:21:29.750 "assigned_rate_limits": { 00:21:29.750 "rw_ios_per_sec": 0, 00:21:29.750 "rw_mbytes_per_sec": 0, 00:21:29.750 "r_mbytes_per_sec": 0, 00:21:29.750 "w_mbytes_per_sec": 0 00:21:29.750 }, 00:21:29.750 "claimed": true, 00:21:29.750 "claim_type": "exclusive_write", 00:21:29.750 "zoned": false, 00:21:29.750 "supported_io_types": { 00:21:29.750 "read": true, 00:21:29.750 "write": true, 00:21:29.750 "unmap": true, 00:21:29.750 "flush": true, 00:21:29.750 "reset": true, 00:21:29.750 "nvme_admin": false, 00:21:29.750 "nvme_io": false, 00:21:29.750 "nvme_io_md": false, 00:21:29.750 "write_zeroes": true, 00:21:29.750 "zcopy": true, 00:21:29.750 "get_zone_info": false, 00:21:29.750 "zone_management": false, 00:21:29.750 "zone_append": false, 00:21:29.750 "compare": false, 00:21:29.750 "compare_and_write": false, 00:21:29.750 "abort": true, 00:21:29.750 "seek_hole": false, 00:21:29.750 "seek_data": false, 00:21:29.750 "copy": true, 00:21:29.750 "nvme_iov_md": false 00:21:29.750 }, 00:21:29.750 "memory_domains": [ 00:21:29.750 { 00:21:29.750 "dma_device_id": "system", 00:21:29.750 "dma_device_type": 1 00:21:29.750 }, 00:21:29.750 { 00:21:29.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.750 "dma_device_type": 2 00:21:29.750 } 00:21:29.750 ], 00:21:29.750 "driver_specific": {} 00:21:29.750 } 00:21:29.750 ] 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.750 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.751 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.751 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.751 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.751 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.751 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.751 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.751 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.751 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.751 "name": "Existed_Raid", 
00:21:29.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.751 "strip_size_kb": 64, 00:21:29.751 "state": "configuring", 00:21:29.751 "raid_level": "raid0", 00:21:29.751 "superblock": false, 00:21:29.751 "num_base_bdevs": 4, 00:21:29.751 "num_base_bdevs_discovered": 1, 00:21:29.751 "num_base_bdevs_operational": 4, 00:21:29.751 "base_bdevs_list": [ 00:21:29.751 { 00:21:29.751 "name": "BaseBdev1", 00:21:29.751 "uuid": "9128438b-2277-425e-80bf-3ebdace52718", 00:21:29.751 "is_configured": true, 00:21:29.751 "data_offset": 0, 00:21:29.751 "data_size": 65536 00:21:29.751 }, 00:21:29.751 { 00:21:29.751 "name": "BaseBdev2", 00:21:29.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.751 "is_configured": false, 00:21:29.751 "data_offset": 0, 00:21:29.751 "data_size": 0 00:21:29.751 }, 00:21:29.751 { 00:21:29.751 "name": "BaseBdev3", 00:21:29.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.751 "is_configured": false, 00:21:29.751 "data_offset": 0, 00:21:29.751 "data_size": 0 00:21:29.751 }, 00:21:29.751 { 00:21:29.751 "name": "BaseBdev4", 00:21:29.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.751 "is_configured": false, 00:21:29.751 "data_offset": 0, 00:21:29.751 "data_size": 0 00:21:29.751 } 00:21:29.751 ] 00:21:29.751 }' 00:21:29.751 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.751 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.321 [2024-12-05 12:53:12.609146] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:30.321 [2024-12-05 12:53:12.609192] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.321 [2024-12-05 12:53:12.617238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:30.321 [2024-12-05 12:53:12.619530] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:30.321 [2024-12-05 12:53:12.619685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:30.321 [2024-12-05 12:53:12.619765] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:30.321 [2024-12-05 12:53:12.619796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:30.321 [2024-12-05 12:53:12.619884] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:30.321 [2024-12-05 12:53:12.619910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.321 "name": "Existed_Raid", 00:21:30.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.321 "strip_size_kb": 64, 00:21:30.321 "state": "configuring", 00:21:30.321 "raid_level": "raid0", 00:21:30.321 "superblock": false, 00:21:30.321 "num_base_bdevs": 4, 00:21:30.321 
"num_base_bdevs_discovered": 1, 00:21:30.321 "num_base_bdevs_operational": 4, 00:21:30.321 "base_bdevs_list": [ 00:21:30.321 { 00:21:30.321 "name": "BaseBdev1", 00:21:30.321 "uuid": "9128438b-2277-425e-80bf-3ebdace52718", 00:21:30.321 "is_configured": true, 00:21:30.321 "data_offset": 0, 00:21:30.321 "data_size": 65536 00:21:30.321 }, 00:21:30.321 { 00:21:30.321 "name": "BaseBdev2", 00:21:30.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.321 "is_configured": false, 00:21:30.321 "data_offset": 0, 00:21:30.321 "data_size": 0 00:21:30.321 }, 00:21:30.321 { 00:21:30.321 "name": "BaseBdev3", 00:21:30.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.321 "is_configured": false, 00:21:30.321 "data_offset": 0, 00:21:30.321 "data_size": 0 00:21:30.321 }, 00:21:30.321 { 00:21:30.321 "name": "BaseBdev4", 00:21:30.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.321 "is_configured": false, 00:21:30.321 "data_offset": 0, 00:21:30.321 "data_size": 0 00:21:30.321 } 00:21:30.321 ] 00:21:30.321 }' 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.321 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.582 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:30.582 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.582 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.582 [2024-12-05 12:53:12.977115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:30.582 BaseBdev2 00:21:30.582 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.582 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:30.582 12:53:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:30.582 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:30.582 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:30.582 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:30.582 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:30.582 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:30.582 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.582 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.582 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.582 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:30.582 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.582 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.582 [ 00:21:30.582 { 00:21:30.582 "name": "BaseBdev2", 00:21:30.582 "aliases": [ 00:21:30.582 "befdc4e9-5000-49f8-bf5b-61a80de5311f" 00:21:30.582 ], 00:21:30.582 "product_name": "Malloc disk", 00:21:30.582 "block_size": 512, 00:21:30.582 "num_blocks": 65536, 00:21:30.582 "uuid": "befdc4e9-5000-49f8-bf5b-61a80de5311f", 00:21:30.582 "assigned_rate_limits": { 00:21:30.582 "rw_ios_per_sec": 0, 00:21:30.582 "rw_mbytes_per_sec": 0, 00:21:30.582 "r_mbytes_per_sec": 0, 00:21:30.582 "w_mbytes_per_sec": 0 00:21:30.582 }, 00:21:30.582 "claimed": true, 00:21:30.582 "claim_type": "exclusive_write", 00:21:30.582 "zoned": false, 00:21:30.582 "supported_io_types": { 
00:21:30.582 "read": true, 00:21:30.582 "write": true, 00:21:30.582 "unmap": true, 00:21:30.582 "flush": true, 00:21:30.582 "reset": true, 00:21:30.582 "nvme_admin": false, 00:21:30.582 "nvme_io": false, 00:21:30.582 "nvme_io_md": false, 00:21:30.582 "write_zeroes": true, 00:21:30.582 "zcopy": true, 00:21:30.582 "get_zone_info": false, 00:21:30.582 "zone_management": false, 00:21:30.582 "zone_append": false, 00:21:30.582 "compare": false, 00:21:30.582 "compare_and_write": false, 00:21:30.582 "abort": true, 00:21:30.582 "seek_hole": false, 00:21:30.582 "seek_data": false, 00:21:30.582 "copy": true, 00:21:30.582 "nvme_iov_md": false 00:21:30.582 }, 00:21:30.582 "memory_domains": [ 00:21:30.582 { 00:21:30.582 "dma_device_id": "system", 00:21:30.582 "dma_device_type": 1 00:21:30.582 }, 00:21:30.582 { 00:21:30.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.582 "dma_device_type": 2 00:21:30.582 } 00:21:30.582 ], 00:21:30.582 "driver_specific": {} 00:21:30.582 } 00:21:30.582 ] 00:21:30.582 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.582 12:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:30.582 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:30.582 12:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:30.582 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:30.582 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:30.582 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:30.582 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:30.582 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:21:30.582 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:30.582 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.582 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.582 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.582 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.582 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.582 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.582 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.582 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.582 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.582 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.582 "name": "Existed_Raid", 00:21:30.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.582 "strip_size_kb": 64, 00:21:30.582 "state": "configuring", 00:21:30.582 "raid_level": "raid0", 00:21:30.582 "superblock": false, 00:21:30.582 "num_base_bdevs": 4, 00:21:30.582 "num_base_bdevs_discovered": 2, 00:21:30.582 "num_base_bdevs_operational": 4, 00:21:30.582 "base_bdevs_list": [ 00:21:30.582 { 00:21:30.582 "name": "BaseBdev1", 00:21:30.582 "uuid": "9128438b-2277-425e-80bf-3ebdace52718", 00:21:30.582 "is_configured": true, 00:21:30.582 "data_offset": 0, 00:21:30.582 "data_size": 65536 00:21:30.582 }, 00:21:30.582 { 00:21:30.582 "name": "BaseBdev2", 00:21:30.582 "uuid": "befdc4e9-5000-49f8-bf5b-61a80de5311f", 00:21:30.582 
"is_configured": true, 00:21:30.582 "data_offset": 0, 00:21:30.582 "data_size": 65536 00:21:30.582 }, 00:21:30.582 { 00:21:30.582 "name": "BaseBdev3", 00:21:30.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.582 "is_configured": false, 00:21:30.582 "data_offset": 0, 00:21:30.582 "data_size": 0 00:21:30.582 }, 00:21:30.582 { 00:21:30.582 "name": "BaseBdev4", 00:21:30.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.582 "is_configured": false, 00:21:30.582 "data_offset": 0, 00:21:30.582 "data_size": 0 00:21:30.582 } 00:21:30.582 ] 00:21:30.582 }' 00:21:30.582 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.582 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.842 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:30.842 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.842 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.842 [2024-12-05 12:53:13.331867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:30.842 BaseBdev3 00:21:30.842 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.842 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:30.842 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:30.842 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:30.842 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:30.842 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:30.842 12:53:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:30.842 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:30.842 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.842 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.842 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.842 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:30.842 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.842 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.842 [ 00:21:30.842 { 00:21:30.842 "name": "BaseBdev3", 00:21:30.842 "aliases": [ 00:21:30.842 "8b237bf6-5cb9-42d8-b54d-0863be87df3f" 00:21:30.842 ], 00:21:30.842 "product_name": "Malloc disk", 00:21:30.842 "block_size": 512, 00:21:30.842 "num_blocks": 65536, 00:21:30.842 "uuid": "8b237bf6-5cb9-42d8-b54d-0863be87df3f", 00:21:30.842 "assigned_rate_limits": { 00:21:30.842 "rw_ios_per_sec": 0, 00:21:30.842 "rw_mbytes_per_sec": 0, 00:21:30.843 "r_mbytes_per_sec": 0, 00:21:30.843 "w_mbytes_per_sec": 0 00:21:30.843 }, 00:21:30.843 "claimed": true, 00:21:30.843 "claim_type": "exclusive_write", 00:21:30.843 "zoned": false, 00:21:30.843 "supported_io_types": { 00:21:30.843 "read": true, 00:21:30.843 "write": true, 00:21:30.843 "unmap": true, 00:21:30.843 "flush": true, 00:21:30.843 "reset": true, 00:21:30.843 "nvme_admin": false, 00:21:30.843 "nvme_io": false, 00:21:30.843 "nvme_io_md": false, 00:21:30.843 "write_zeroes": true, 00:21:30.843 "zcopy": true, 00:21:30.843 "get_zone_info": false, 00:21:30.843 "zone_management": false, 00:21:30.843 "zone_append": false, 00:21:30.843 "compare": false, 00:21:30.843 "compare_and_write": false, 
00:21:30.843 "abort": true, 00:21:30.843 "seek_hole": false, 00:21:30.843 "seek_data": false, 00:21:30.843 "copy": true, 00:21:30.843 "nvme_iov_md": false 00:21:30.843 }, 00:21:30.843 "memory_domains": [ 00:21:30.843 { 00:21:30.843 "dma_device_id": "system", 00:21:30.843 "dma_device_type": 1 00:21:30.843 }, 00:21:30.843 { 00:21:30.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.843 "dma_device_type": 2 00:21:30.843 } 00:21:30.843 ], 00:21:30.843 "driver_specific": {} 00:21:30.843 } 00:21:30.843 ] 00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.843 "name": "Existed_Raid", 00:21:30.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.843 "strip_size_kb": 64, 00:21:30.843 "state": "configuring", 00:21:30.843 "raid_level": "raid0", 00:21:30.843 "superblock": false, 00:21:30.843 "num_base_bdevs": 4, 00:21:30.843 "num_base_bdevs_discovered": 3, 00:21:30.843 "num_base_bdevs_operational": 4, 00:21:30.843 "base_bdevs_list": [ 00:21:30.843 { 00:21:30.843 "name": "BaseBdev1", 00:21:30.843 "uuid": "9128438b-2277-425e-80bf-3ebdace52718", 00:21:30.843 "is_configured": true, 00:21:30.843 "data_offset": 0, 00:21:30.843 "data_size": 65536 00:21:30.843 }, 00:21:30.843 { 00:21:30.843 "name": "BaseBdev2", 00:21:30.843 "uuid": "befdc4e9-5000-49f8-bf5b-61a80de5311f", 00:21:30.843 "is_configured": true, 00:21:30.843 "data_offset": 0, 00:21:30.843 "data_size": 65536 00:21:30.843 }, 00:21:30.843 { 00:21:30.843 "name": "BaseBdev3", 00:21:30.843 "uuid": "8b237bf6-5cb9-42d8-b54d-0863be87df3f", 00:21:30.843 "is_configured": true, 00:21:30.843 "data_offset": 0, 00:21:30.843 "data_size": 65536 00:21:30.843 }, 00:21:30.843 { 00:21:30.843 "name": "BaseBdev4", 00:21:30.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.843 "is_configured": false, 
00:21:30.843 "data_offset": 0, 00:21:30.843 "data_size": 0 00:21:30.843 } 00:21:30.843 ] 00:21:30.843 }' 00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.843 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.102 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:31.102 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.102 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.363 [2024-12-05 12:53:13.706870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:31.363 [2024-12-05 12:53:13.707099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:31.363 [2024-12-05 12:53:13.707116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:31.363 [2024-12-05 12:53:13.707398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:31.363 [2024-12-05 12:53:13.707574] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:31.363 [2024-12-05 12:53:13.707586] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:31.363 [2024-12-05 12:53:13.707830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:31.363 BaseBdev4 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.363 [ 00:21:31.363 { 00:21:31.363 "name": "BaseBdev4", 00:21:31.363 "aliases": [ 00:21:31.363 "dedca2e5-4c2a-48e5-a822-9483d9b87072" 00:21:31.363 ], 00:21:31.363 "product_name": "Malloc disk", 00:21:31.363 "block_size": 512, 00:21:31.363 "num_blocks": 65536, 00:21:31.363 "uuid": "dedca2e5-4c2a-48e5-a822-9483d9b87072", 00:21:31.363 "assigned_rate_limits": { 00:21:31.363 "rw_ios_per_sec": 0, 00:21:31.363 "rw_mbytes_per_sec": 0, 00:21:31.363 "r_mbytes_per_sec": 0, 00:21:31.363 "w_mbytes_per_sec": 0 00:21:31.363 }, 00:21:31.363 "claimed": true, 00:21:31.363 "claim_type": "exclusive_write", 00:21:31.363 "zoned": false, 00:21:31.363 "supported_io_types": { 00:21:31.363 "read": true, 00:21:31.363 "write": true, 00:21:31.363 "unmap": true, 00:21:31.363 "flush": true, 00:21:31.363 "reset": true, 00:21:31.363 
"nvme_admin": false, 00:21:31.363 "nvme_io": false, 00:21:31.363 "nvme_io_md": false, 00:21:31.363 "write_zeroes": true, 00:21:31.363 "zcopy": true, 00:21:31.363 "get_zone_info": false, 00:21:31.363 "zone_management": false, 00:21:31.363 "zone_append": false, 00:21:31.363 "compare": false, 00:21:31.363 "compare_and_write": false, 00:21:31.363 "abort": true, 00:21:31.363 "seek_hole": false, 00:21:31.363 "seek_data": false, 00:21:31.363 "copy": true, 00:21:31.363 "nvme_iov_md": false 00:21:31.363 }, 00:21:31.363 "memory_domains": [ 00:21:31.363 { 00:21:31.363 "dma_device_id": "system", 00:21:31.363 "dma_device_type": 1 00:21:31.363 }, 00:21:31.363 { 00:21:31.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.363 "dma_device_type": 2 00:21:31.363 } 00:21:31.363 ], 00:21:31.363 "driver_specific": {} 00:21:31.363 } 00:21:31.363 ] 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:31.363 12:53:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.363 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:31.363 "name": "Existed_Raid", 00:21:31.363 "uuid": "a6d2ec2b-0fe5-44c4-8f4b-a18057f45761", 00:21:31.363 "strip_size_kb": 64, 00:21:31.363 "state": "online", 00:21:31.363 "raid_level": "raid0", 00:21:31.363 "superblock": false, 00:21:31.363 "num_base_bdevs": 4, 00:21:31.363 "num_base_bdevs_discovered": 4, 00:21:31.363 "num_base_bdevs_operational": 4, 00:21:31.363 "base_bdevs_list": [ 00:21:31.363 { 00:21:31.363 "name": "BaseBdev1", 00:21:31.363 "uuid": "9128438b-2277-425e-80bf-3ebdace52718", 00:21:31.363 "is_configured": true, 00:21:31.363 "data_offset": 0, 00:21:31.363 "data_size": 65536 00:21:31.363 }, 00:21:31.363 { 00:21:31.363 "name": "BaseBdev2", 00:21:31.364 "uuid": "befdc4e9-5000-49f8-bf5b-61a80de5311f", 00:21:31.364 "is_configured": true, 00:21:31.364 "data_offset": 0, 00:21:31.364 "data_size": 65536 00:21:31.364 }, 00:21:31.364 { 00:21:31.364 "name": "BaseBdev3", 00:21:31.364 "uuid": 
"8b237bf6-5cb9-42d8-b54d-0863be87df3f", 00:21:31.364 "is_configured": true, 00:21:31.364 "data_offset": 0, 00:21:31.364 "data_size": 65536 00:21:31.364 }, 00:21:31.364 { 00:21:31.364 "name": "BaseBdev4", 00:21:31.364 "uuid": "dedca2e5-4c2a-48e5-a822-9483d9b87072", 00:21:31.364 "is_configured": true, 00:21:31.364 "data_offset": 0, 00:21:31.364 "data_size": 65536 00:21:31.364 } 00:21:31.364 ] 00:21:31.364 }' 00:21:31.364 12:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:31.364 12:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.623 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:31.623 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:31.623 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:31.623 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:31.623 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:31.623 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:31.623 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:31.623 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:31.623 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.623 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.623 [2024-12-05 12:53:14.083389] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:31.623 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.623 12:53:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:31.623 "name": "Existed_Raid", 00:21:31.623 "aliases": [ 00:21:31.623 "a6d2ec2b-0fe5-44c4-8f4b-a18057f45761" 00:21:31.623 ], 00:21:31.623 "product_name": "Raid Volume", 00:21:31.623 "block_size": 512, 00:21:31.623 "num_blocks": 262144, 00:21:31.623 "uuid": "a6d2ec2b-0fe5-44c4-8f4b-a18057f45761", 00:21:31.623 "assigned_rate_limits": { 00:21:31.623 "rw_ios_per_sec": 0, 00:21:31.623 "rw_mbytes_per_sec": 0, 00:21:31.623 "r_mbytes_per_sec": 0, 00:21:31.623 "w_mbytes_per_sec": 0 00:21:31.623 }, 00:21:31.623 "claimed": false, 00:21:31.623 "zoned": false, 00:21:31.623 "supported_io_types": { 00:21:31.623 "read": true, 00:21:31.623 "write": true, 00:21:31.623 "unmap": true, 00:21:31.623 "flush": true, 00:21:31.623 "reset": true, 00:21:31.623 "nvme_admin": false, 00:21:31.623 "nvme_io": false, 00:21:31.623 "nvme_io_md": false, 00:21:31.623 "write_zeroes": true, 00:21:31.623 "zcopy": false, 00:21:31.623 "get_zone_info": false, 00:21:31.623 "zone_management": false, 00:21:31.623 "zone_append": false, 00:21:31.623 "compare": false, 00:21:31.623 "compare_and_write": false, 00:21:31.623 "abort": false, 00:21:31.623 "seek_hole": false, 00:21:31.623 "seek_data": false, 00:21:31.623 "copy": false, 00:21:31.623 "nvme_iov_md": false 00:21:31.623 }, 00:21:31.623 "memory_domains": [ 00:21:31.623 { 00:21:31.623 "dma_device_id": "system", 00:21:31.623 "dma_device_type": 1 00:21:31.623 }, 00:21:31.623 { 00:21:31.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.623 "dma_device_type": 2 00:21:31.623 }, 00:21:31.623 { 00:21:31.623 "dma_device_id": "system", 00:21:31.623 "dma_device_type": 1 00:21:31.623 }, 00:21:31.623 { 00:21:31.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.623 "dma_device_type": 2 00:21:31.623 }, 00:21:31.623 { 00:21:31.623 "dma_device_id": "system", 00:21:31.623 "dma_device_type": 1 00:21:31.623 }, 00:21:31.623 { 00:21:31.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:21:31.623 "dma_device_type": 2 00:21:31.623 }, 00:21:31.623 { 00:21:31.623 "dma_device_id": "system", 00:21:31.623 "dma_device_type": 1 00:21:31.623 }, 00:21:31.623 { 00:21:31.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.623 "dma_device_type": 2 00:21:31.623 } 00:21:31.623 ], 00:21:31.623 "driver_specific": { 00:21:31.623 "raid": { 00:21:31.623 "uuid": "a6d2ec2b-0fe5-44c4-8f4b-a18057f45761", 00:21:31.623 "strip_size_kb": 64, 00:21:31.623 "state": "online", 00:21:31.623 "raid_level": "raid0", 00:21:31.623 "superblock": false, 00:21:31.623 "num_base_bdevs": 4, 00:21:31.623 "num_base_bdevs_discovered": 4, 00:21:31.623 "num_base_bdevs_operational": 4, 00:21:31.623 "base_bdevs_list": [ 00:21:31.624 { 00:21:31.624 "name": "BaseBdev1", 00:21:31.624 "uuid": "9128438b-2277-425e-80bf-3ebdace52718", 00:21:31.624 "is_configured": true, 00:21:31.624 "data_offset": 0, 00:21:31.624 "data_size": 65536 00:21:31.624 }, 00:21:31.624 { 00:21:31.624 "name": "BaseBdev2", 00:21:31.624 "uuid": "befdc4e9-5000-49f8-bf5b-61a80de5311f", 00:21:31.624 "is_configured": true, 00:21:31.624 "data_offset": 0, 00:21:31.624 "data_size": 65536 00:21:31.624 }, 00:21:31.624 { 00:21:31.624 "name": "BaseBdev3", 00:21:31.624 "uuid": "8b237bf6-5cb9-42d8-b54d-0863be87df3f", 00:21:31.624 "is_configured": true, 00:21:31.624 "data_offset": 0, 00:21:31.624 "data_size": 65536 00:21:31.624 }, 00:21:31.624 { 00:21:31.624 "name": "BaseBdev4", 00:21:31.624 "uuid": "dedca2e5-4c2a-48e5-a822-9483d9b87072", 00:21:31.624 "is_configured": true, 00:21:31.624 "data_offset": 0, 00:21:31.624 "data_size": 65536 00:21:31.624 } 00:21:31.624 ] 00:21:31.624 } 00:21:31.624 } 00:21:31.624 }' 00:21:31.624 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:31.624 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:31.624 BaseBdev2 00:21:31.624 BaseBdev3 
00:21:31.624 BaseBdev4' 00:21:31.624 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:31.624 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:31.624 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:31.624 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:31.624 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:31.624 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.624 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.624 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.624 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:31.624 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:31.624 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.884 12:53:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:31.884 12:53:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.884 [2024-12-05 12:53:14.299124] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:31.884 [2024-12-05 12:53:14.299153] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:31.884 [2024-12-05 12:53:14.299200] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:31.884 "name": "Existed_Raid", 00:21:31.884 "uuid": "a6d2ec2b-0fe5-44c4-8f4b-a18057f45761", 00:21:31.884 "strip_size_kb": 64, 00:21:31.884 "state": "offline", 00:21:31.884 "raid_level": "raid0", 00:21:31.884 "superblock": false, 00:21:31.884 "num_base_bdevs": 4, 00:21:31.884 "num_base_bdevs_discovered": 3, 00:21:31.884 "num_base_bdevs_operational": 3, 00:21:31.884 "base_bdevs_list": [ 00:21:31.884 { 00:21:31.884 "name": null, 00:21:31.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.884 "is_configured": false, 00:21:31.884 "data_offset": 0, 00:21:31.884 "data_size": 65536 00:21:31.884 }, 00:21:31.884 { 00:21:31.884 "name": "BaseBdev2", 00:21:31.884 "uuid": "befdc4e9-5000-49f8-bf5b-61a80de5311f", 00:21:31.884 "is_configured": 
true, 00:21:31.884 "data_offset": 0, 00:21:31.884 "data_size": 65536 00:21:31.884 }, 00:21:31.884 { 00:21:31.884 "name": "BaseBdev3", 00:21:31.884 "uuid": "8b237bf6-5cb9-42d8-b54d-0863be87df3f", 00:21:31.884 "is_configured": true, 00:21:31.884 "data_offset": 0, 00:21:31.884 "data_size": 65536 00:21:31.884 }, 00:21:31.884 { 00:21:31.884 "name": "BaseBdev4", 00:21:31.884 "uuid": "dedca2e5-4c2a-48e5-a822-9483d9b87072", 00:21:31.884 "is_configured": true, 00:21:31.884 "data_offset": 0, 00:21:31.884 "data_size": 65536 00:21:31.884 } 00:21:31.884 ] 00:21:31.884 }' 00:21:31.884 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:31.885 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.144 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:32.144 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:32.144 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.144 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:32.144 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.144 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.144 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.144 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:32.144 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:32.144 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:32.144 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
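The trace above shows the test deciding that deleting BaseBdev1 should take the raid0 array offline rather than degraded: `has_redundancy raid0` hits the fall-through branch of a case statement and returns 1, so `expected_state` becomes `offline`. A minimal sketch of that decision logic, with the helper name and the set of redundant raid levels reconstructed from this log rather than copied from the upstream `bdev_raid.sh`:

```shell
# Sketch of the redundancy check traced above (names are assumptions
# based on this log, not the exact upstream helper). Raid levels without
# redundancy take the fall-through branch and return non-zero, so losing
# a base bdev sends the array to "offline" instead of "degraded".
has_redundancy() {
    case $1 in
    raid1 | raid5f)
        return 0
        ;;
    *)
        return 1
        ;;
    esac
}

expected_state=""
if has_redundancy raid0; then
    expected_state=online
else
    expected_state=offline
fi
echo "$expected_state"
```

With `raid0` the fall-through branch fires, matching the `return 1` and `expected_state=offline` lines in the trace.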
00:21:32.144 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.144 [2024-12-05 12:53:14.715309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.404 [2024-12-05 12:53:14.818740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:32.404 12:53:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.404 [2024-12-05 12:53:14.918916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:32.404 [2024-12-05 12:53:14.918965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.404 12:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:32.664 12:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.664 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:32.664 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:32.664 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:21:32.664 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:32.664 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:32.664 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:32.664 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.664 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.664 BaseBdev2 00:21:32.664 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.664 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:32.664 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:32.664 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:32.664 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:32.664 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.665 [ 00:21:32.665 { 00:21:32.665 "name": "BaseBdev2", 00:21:32.665 "aliases": [ 00:21:32.665 "ae54b333-3234-4239-930f-4c3989eadedf" 00:21:32.665 ], 00:21:32.665 "product_name": "Malloc disk", 00:21:32.665 "block_size": 512, 00:21:32.665 "num_blocks": 65536, 00:21:32.665 "uuid": "ae54b333-3234-4239-930f-4c3989eadedf", 00:21:32.665 "assigned_rate_limits": { 00:21:32.665 "rw_ios_per_sec": 0, 00:21:32.665 "rw_mbytes_per_sec": 0, 00:21:32.665 "r_mbytes_per_sec": 0, 00:21:32.665 "w_mbytes_per_sec": 0 00:21:32.665 }, 00:21:32.665 "claimed": false, 00:21:32.665 "zoned": false, 00:21:32.665 "supported_io_types": { 00:21:32.665 "read": true, 00:21:32.665 "write": true, 00:21:32.665 "unmap": true, 00:21:32.665 "flush": true, 00:21:32.665 "reset": true, 00:21:32.665 "nvme_admin": false, 00:21:32.665 "nvme_io": false, 00:21:32.665 "nvme_io_md": false, 00:21:32.665 "write_zeroes": true, 00:21:32.665 "zcopy": true, 00:21:32.665 "get_zone_info": false, 00:21:32.665 "zone_management": false, 00:21:32.665 "zone_append": false, 00:21:32.665 "compare": false, 00:21:32.665 "compare_and_write": false, 00:21:32.665 "abort": true, 00:21:32.665 "seek_hole": false, 00:21:32.665 
"seek_data": false, 00:21:32.665 "copy": true, 00:21:32.665 "nvme_iov_md": false 00:21:32.665 }, 00:21:32.665 "memory_domains": [ 00:21:32.665 { 00:21:32.665 "dma_device_id": "system", 00:21:32.665 "dma_device_type": 1 00:21:32.665 }, 00:21:32.665 { 00:21:32.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.665 "dma_device_type": 2 00:21:32.665 } 00:21:32.665 ], 00:21:32.665 "driver_specific": {} 00:21:32.665 } 00:21:32.665 ] 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.665 BaseBdev3 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.665 [ 00:21:32.665 { 00:21:32.665 "name": "BaseBdev3", 00:21:32.665 "aliases": [ 00:21:32.665 "e4b67e49-f241-4adf-98a4-64e91e8bb39b" 00:21:32.665 ], 00:21:32.665 "product_name": "Malloc disk", 00:21:32.665 "block_size": 512, 00:21:32.665 "num_blocks": 65536, 00:21:32.665 "uuid": "e4b67e49-f241-4adf-98a4-64e91e8bb39b", 00:21:32.665 "assigned_rate_limits": { 00:21:32.665 "rw_ios_per_sec": 0, 00:21:32.665 "rw_mbytes_per_sec": 0, 00:21:32.665 "r_mbytes_per_sec": 0, 00:21:32.665 "w_mbytes_per_sec": 0 00:21:32.665 }, 00:21:32.665 "claimed": false, 00:21:32.665 "zoned": false, 00:21:32.665 "supported_io_types": { 00:21:32.665 "read": true, 00:21:32.665 "write": true, 00:21:32.665 "unmap": true, 00:21:32.665 "flush": true, 00:21:32.665 "reset": true, 00:21:32.665 "nvme_admin": false, 00:21:32.665 "nvme_io": false, 00:21:32.665 "nvme_io_md": false, 00:21:32.665 "write_zeroes": true, 00:21:32.665 "zcopy": true, 00:21:32.665 "get_zone_info": false, 00:21:32.665 "zone_management": false, 00:21:32.665 "zone_append": false, 00:21:32.665 "compare": false, 00:21:32.665 "compare_and_write": false, 00:21:32.665 "abort": true, 00:21:32.665 "seek_hole": false, 00:21:32.665 "seek_data": false, 
00:21:32.665 "copy": true, 00:21:32.665 "nvme_iov_md": false 00:21:32.665 }, 00:21:32.665 "memory_domains": [ 00:21:32.665 { 00:21:32.665 "dma_device_id": "system", 00:21:32.665 "dma_device_type": 1 00:21:32.665 }, 00:21:32.665 { 00:21:32.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.665 "dma_device_type": 2 00:21:32.665 } 00:21:32.665 ], 00:21:32.665 "driver_specific": {} 00:21:32.665 } 00:21:32.665 ] 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.665 BaseBdev4 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:32.665 
12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.665 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.665 [ 00:21:32.665 { 00:21:32.665 "name": "BaseBdev4", 00:21:32.665 "aliases": [ 00:21:32.665 "19fd8db7-691f-4cf0-92ed-8887b051e11f" 00:21:32.665 ], 00:21:32.665 "product_name": "Malloc disk", 00:21:32.665 "block_size": 512, 00:21:32.665 "num_blocks": 65536, 00:21:32.665 "uuid": "19fd8db7-691f-4cf0-92ed-8887b051e11f", 00:21:32.665 "assigned_rate_limits": { 00:21:32.665 "rw_ios_per_sec": 0, 00:21:32.665 "rw_mbytes_per_sec": 0, 00:21:32.665 "r_mbytes_per_sec": 0, 00:21:32.665 "w_mbytes_per_sec": 0 00:21:32.665 }, 00:21:32.665 "claimed": false, 00:21:32.665 "zoned": false, 00:21:32.665 "supported_io_types": { 00:21:32.665 "read": true, 00:21:32.665 "write": true, 00:21:32.665 "unmap": true, 00:21:32.665 "flush": true, 00:21:32.665 "reset": true, 00:21:32.665 "nvme_admin": false, 00:21:32.665 "nvme_io": false, 00:21:32.665 "nvme_io_md": false, 00:21:32.665 "write_zeroes": true, 00:21:32.665 "zcopy": true, 00:21:32.665 "get_zone_info": false, 00:21:32.665 "zone_management": false, 00:21:32.665 "zone_append": false, 00:21:32.665 "compare": false, 00:21:32.665 "compare_and_write": false, 00:21:32.665 "abort": true, 00:21:32.665 "seek_hole": false, 00:21:32.665 "seek_data": false, 00:21:32.665 
"copy": true, 00:21:32.665 "nvme_iov_md": false 00:21:32.665 }, 00:21:32.665 "memory_domains": [ 00:21:32.665 { 00:21:32.665 "dma_device_id": "system", 00:21:32.665 "dma_device_type": 1 00:21:32.665 }, 00:21:32.665 { 00:21:32.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.666 "dma_device_type": 2 00:21:32.666 } 00:21:32.666 ], 00:21:32.666 "driver_specific": {} 00:21:32.666 } 00:21:32.666 ] 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.666 [2024-12-05 12:53:15.190352] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:32.666 [2024-12-05 12:53:15.190556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:32.666 [2024-12-05 12:53:15.190647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:32.666 [2024-12-05 12:53:15.192737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:32.666 [2024-12-05 12:53:15.192878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.666 12:53:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.666 "name": "Existed_Raid", 00:21:32.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.666 "strip_size_kb": 64, 00:21:32.666 "state": "configuring", 00:21:32.666 
"raid_level": "raid0", 00:21:32.666 "superblock": false, 00:21:32.666 "num_base_bdevs": 4, 00:21:32.666 "num_base_bdevs_discovered": 3, 00:21:32.666 "num_base_bdevs_operational": 4, 00:21:32.666 "base_bdevs_list": [ 00:21:32.666 { 00:21:32.666 "name": "BaseBdev1", 00:21:32.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.666 "is_configured": false, 00:21:32.666 "data_offset": 0, 00:21:32.666 "data_size": 0 00:21:32.666 }, 00:21:32.666 { 00:21:32.666 "name": "BaseBdev2", 00:21:32.666 "uuid": "ae54b333-3234-4239-930f-4c3989eadedf", 00:21:32.666 "is_configured": true, 00:21:32.666 "data_offset": 0, 00:21:32.666 "data_size": 65536 00:21:32.666 }, 00:21:32.666 { 00:21:32.666 "name": "BaseBdev3", 00:21:32.666 "uuid": "e4b67e49-f241-4adf-98a4-64e91e8bb39b", 00:21:32.666 "is_configured": true, 00:21:32.666 "data_offset": 0, 00:21:32.666 "data_size": 65536 00:21:32.666 }, 00:21:32.666 { 00:21:32.666 "name": "BaseBdev4", 00:21:32.666 "uuid": "19fd8db7-691f-4cf0-92ed-8887b051e11f", 00:21:32.666 "is_configured": true, 00:21:32.666 "data_offset": 0, 00:21:32.666 "data_size": 65536 00:21:32.666 } 00:21:32.666 ] 00:21:32.666 }' 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.666 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.925 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:32.925 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.925 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.925 [2024-12-05 12:53:15.490370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:32.925 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.925 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:32.925 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:32.925 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:32.925 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:32.925 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:32.925 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:32.925 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.925 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.925 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.925 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.925 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.925 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.925 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.925 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.184 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.184 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:33.184 "name": "Existed_Raid", 00:21:33.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.184 "strip_size_kb": 64, 00:21:33.184 "state": "configuring", 00:21:33.184 "raid_level": "raid0", 00:21:33.184 "superblock": false, 00:21:33.184 
"num_base_bdevs": 4, 00:21:33.184 "num_base_bdevs_discovered": 2, 00:21:33.184 "num_base_bdevs_operational": 4, 00:21:33.184 "base_bdevs_list": [ 00:21:33.184 { 00:21:33.184 "name": "BaseBdev1", 00:21:33.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.184 "is_configured": false, 00:21:33.184 "data_offset": 0, 00:21:33.184 "data_size": 0 00:21:33.184 }, 00:21:33.184 { 00:21:33.184 "name": null, 00:21:33.184 "uuid": "ae54b333-3234-4239-930f-4c3989eadedf", 00:21:33.184 "is_configured": false, 00:21:33.184 "data_offset": 0, 00:21:33.184 "data_size": 65536 00:21:33.184 }, 00:21:33.184 { 00:21:33.184 "name": "BaseBdev3", 00:21:33.184 "uuid": "e4b67e49-f241-4adf-98a4-64e91e8bb39b", 00:21:33.184 "is_configured": true, 00:21:33.184 "data_offset": 0, 00:21:33.184 "data_size": 65536 00:21:33.184 }, 00:21:33.184 { 00:21:33.184 "name": "BaseBdev4", 00:21:33.184 "uuid": "19fd8db7-691f-4cf0-92ed-8887b051e11f", 00:21:33.184 "is_configured": true, 00:21:33.184 "data_offset": 0, 00:21:33.184 "data_size": 65536 00:21:33.184 } 00:21:33.184 ] 00:21:33.184 }' 00:21:33.184 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.184 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:33.444 12:53:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.444 [2024-12-05 12:53:15.832601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:33.444 BaseBdev1 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:33.444 [ 00:21:33.444 { 00:21:33.444 "name": "BaseBdev1", 00:21:33.444 "aliases": [ 00:21:33.444 "65b1da73-d1e2-4391-a5b5-17188d372ae9" 00:21:33.444 ], 00:21:33.444 "product_name": "Malloc disk", 00:21:33.444 "block_size": 512, 00:21:33.444 "num_blocks": 65536, 00:21:33.444 "uuid": "65b1da73-d1e2-4391-a5b5-17188d372ae9", 00:21:33.444 "assigned_rate_limits": { 00:21:33.444 "rw_ios_per_sec": 0, 00:21:33.444 "rw_mbytes_per_sec": 0, 00:21:33.444 "r_mbytes_per_sec": 0, 00:21:33.444 "w_mbytes_per_sec": 0 00:21:33.444 }, 00:21:33.444 "claimed": true, 00:21:33.444 "claim_type": "exclusive_write", 00:21:33.444 "zoned": false, 00:21:33.444 "supported_io_types": { 00:21:33.444 "read": true, 00:21:33.444 "write": true, 00:21:33.444 "unmap": true, 00:21:33.444 "flush": true, 00:21:33.444 "reset": true, 00:21:33.444 "nvme_admin": false, 00:21:33.444 "nvme_io": false, 00:21:33.444 "nvme_io_md": false, 00:21:33.444 "write_zeroes": true, 00:21:33.444 "zcopy": true, 00:21:33.444 "get_zone_info": false, 00:21:33.444 "zone_management": false, 00:21:33.444 "zone_append": false, 00:21:33.444 "compare": false, 00:21:33.444 "compare_and_write": false, 00:21:33.444 "abort": true, 00:21:33.444 "seek_hole": false, 00:21:33.444 "seek_data": false, 00:21:33.444 "copy": true, 00:21:33.444 "nvme_iov_md": false 00:21:33.444 }, 00:21:33.444 "memory_domains": [ 00:21:33.444 { 00:21:33.444 "dma_device_id": "system", 00:21:33.444 "dma_device_type": 1 00:21:33.444 }, 00:21:33.444 { 00:21:33.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.444 "dma_device_type": 2 00:21:33.444 } 00:21:33.444 ], 00:21:33.444 "driver_specific": {} 00:21:33.444 } 00:21:33.444 ] 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.444 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:33.444 "name": "Existed_Raid", 00:21:33.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.444 "strip_size_kb": 64, 00:21:33.444 "state": "configuring", 00:21:33.444 "raid_level": "raid0", 00:21:33.444 "superblock": false, 
00:21:33.444 "num_base_bdevs": 4, 00:21:33.444 "num_base_bdevs_discovered": 3, 00:21:33.444 "num_base_bdevs_operational": 4, 00:21:33.444 "base_bdevs_list": [ 00:21:33.444 { 00:21:33.444 "name": "BaseBdev1", 00:21:33.444 "uuid": "65b1da73-d1e2-4391-a5b5-17188d372ae9", 00:21:33.444 "is_configured": true, 00:21:33.444 "data_offset": 0, 00:21:33.444 "data_size": 65536 00:21:33.444 }, 00:21:33.444 { 00:21:33.444 "name": null, 00:21:33.444 "uuid": "ae54b333-3234-4239-930f-4c3989eadedf", 00:21:33.444 "is_configured": false, 00:21:33.444 "data_offset": 0, 00:21:33.444 "data_size": 65536 00:21:33.444 }, 00:21:33.444 { 00:21:33.444 "name": "BaseBdev3", 00:21:33.444 "uuid": "e4b67e49-f241-4adf-98a4-64e91e8bb39b", 00:21:33.444 "is_configured": true, 00:21:33.444 "data_offset": 0, 00:21:33.444 "data_size": 65536 00:21:33.445 }, 00:21:33.445 { 00:21:33.445 "name": "BaseBdev4", 00:21:33.445 "uuid": "19fd8db7-691f-4cf0-92ed-8887b051e11f", 00:21:33.445 "is_configured": true, 00:21:33.445 "data_offset": 0, 00:21:33.445 "data_size": 65536 00:21:33.445 } 00:21:33.445 ] 00:21:33.445 }' 00:21:33.445 12:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.445 12:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:33.704 12:53:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.704 [2024-12-05 12:53:16.236765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:33.704 12:53:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.704 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.705 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:33.705 "name": "Existed_Raid", 00:21:33.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.705 "strip_size_kb": 64, 00:21:33.705 "state": "configuring", 00:21:33.705 "raid_level": "raid0", 00:21:33.705 "superblock": false, 00:21:33.705 "num_base_bdevs": 4, 00:21:33.705 "num_base_bdevs_discovered": 2, 00:21:33.705 "num_base_bdevs_operational": 4, 00:21:33.705 "base_bdevs_list": [ 00:21:33.705 { 00:21:33.705 "name": "BaseBdev1", 00:21:33.705 "uuid": "65b1da73-d1e2-4391-a5b5-17188d372ae9", 00:21:33.705 "is_configured": true, 00:21:33.705 "data_offset": 0, 00:21:33.705 "data_size": 65536 00:21:33.705 }, 00:21:33.705 { 00:21:33.705 "name": null, 00:21:33.705 "uuid": "ae54b333-3234-4239-930f-4c3989eadedf", 00:21:33.705 "is_configured": false, 00:21:33.705 "data_offset": 0, 00:21:33.705 "data_size": 65536 00:21:33.705 }, 00:21:33.705 { 00:21:33.705 "name": null, 00:21:33.705 "uuid": "e4b67e49-f241-4adf-98a4-64e91e8bb39b", 00:21:33.705 "is_configured": false, 00:21:33.705 "data_offset": 0, 00:21:33.705 "data_size": 65536 00:21:33.705 }, 00:21:33.705 { 00:21:33.705 "name": "BaseBdev4", 00:21:33.705 "uuid": "19fd8db7-691f-4cf0-92ed-8887b051e11f", 00:21:33.705 "is_configured": true, 00:21:33.705 "data_offset": 0, 00:21:33.705 "data_size": 65536 00:21:33.705 } 00:21:33.705 ] 00:21:33.705 }' 00:21:33.705 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.705 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.965 12:53:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.965 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:33.965 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.965 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.225 [2024-12-05 12:53:16.580844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.225 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.225 "name": "Existed_Raid", 00:21:34.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.225 "strip_size_kb": 64, 00:21:34.225 "state": "configuring", 00:21:34.225 "raid_level": "raid0", 00:21:34.225 "superblock": false, 00:21:34.225 "num_base_bdevs": 4, 00:21:34.225 "num_base_bdevs_discovered": 3, 00:21:34.225 "num_base_bdevs_operational": 4, 00:21:34.225 "base_bdevs_list": [ 00:21:34.225 { 00:21:34.225 "name": "BaseBdev1", 00:21:34.225 "uuid": "65b1da73-d1e2-4391-a5b5-17188d372ae9", 00:21:34.225 "is_configured": true, 00:21:34.225 "data_offset": 0, 00:21:34.225 "data_size": 65536 00:21:34.225 }, 00:21:34.225 { 00:21:34.225 "name": null, 00:21:34.226 "uuid": "ae54b333-3234-4239-930f-4c3989eadedf", 00:21:34.226 "is_configured": false, 00:21:34.226 "data_offset": 0, 00:21:34.226 "data_size": 65536 00:21:34.226 }, 00:21:34.226 { 00:21:34.226 "name": "BaseBdev3", 00:21:34.226 "uuid": "e4b67e49-f241-4adf-98a4-64e91e8bb39b", 
00:21:34.226 "is_configured": true, 00:21:34.226 "data_offset": 0, 00:21:34.226 "data_size": 65536 00:21:34.226 }, 00:21:34.226 { 00:21:34.226 "name": "BaseBdev4", 00:21:34.226 "uuid": "19fd8db7-691f-4cf0-92ed-8887b051e11f", 00:21:34.226 "is_configured": true, 00:21:34.226 "data_offset": 0, 00:21:34.226 "data_size": 65536 00:21:34.226 } 00:21:34.226 ] 00:21:34.226 }' 00:21:34.226 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.226 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.486 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.486 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.487 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.487 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:34.487 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.487 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:34.487 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:34.487 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.487 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.487 [2024-12-05 12:53:16.932964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:34.487 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.487 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:34.487 12:53:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:34.487 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:34.487 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:34.487 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:34.487 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:34.487 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:34.487 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.487 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.487 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.487 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.487 12:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.487 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.487 12:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.487 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.487 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.487 "name": "Existed_Raid", 00:21:34.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.487 "strip_size_kb": 64, 00:21:34.487 "state": "configuring", 00:21:34.487 "raid_level": "raid0", 00:21:34.487 "superblock": false, 00:21:34.487 "num_base_bdevs": 4, 00:21:34.487 "num_base_bdevs_discovered": 2, 00:21:34.487 
"num_base_bdevs_operational": 4, 00:21:34.487 "base_bdevs_list": [ 00:21:34.487 { 00:21:34.487 "name": null, 00:21:34.487 "uuid": "65b1da73-d1e2-4391-a5b5-17188d372ae9", 00:21:34.487 "is_configured": false, 00:21:34.487 "data_offset": 0, 00:21:34.487 "data_size": 65536 00:21:34.487 }, 00:21:34.487 { 00:21:34.487 "name": null, 00:21:34.487 "uuid": "ae54b333-3234-4239-930f-4c3989eadedf", 00:21:34.487 "is_configured": false, 00:21:34.487 "data_offset": 0, 00:21:34.487 "data_size": 65536 00:21:34.487 }, 00:21:34.487 { 00:21:34.487 "name": "BaseBdev3", 00:21:34.487 "uuid": "e4b67e49-f241-4adf-98a4-64e91e8bb39b", 00:21:34.487 "is_configured": true, 00:21:34.487 "data_offset": 0, 00:21:34.487 "data_size": 65536 00:21:34.487 }, 00:21:34.487 { 00:21:34.487 "name": "BaseBdev4", 00:21:34.487 "uuid": "19fd8db7-691f-4cf0-92ed-8887b051e11f", 00:21:34.487 "is_configured": true, 00:21:34.487 "data_offset": 0, 00:21:34.487 "data_size": 65536 00:21:34.487 } 00:21:34.487 ] 00:21:34.487 }' 00:21:34.487 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.487 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.748 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.748 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:34.748 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.748 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.010 [2024-12-05 12:53:17.356594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.010 12:53:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.010 "name": "Existed_Raid", 00:21:35.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.010 "strip_size_kb": 64, 00:21:35.010 "state": "configuring", 00:21:35.010 "raid_level": "raid0", 00:21:35.010 "superblock": false, 00:21:35.010 "num_base_bdevs": 4, 00:21:35.010 "num_base_bdevs_discovered": 3, 00:21:35.010 "num_base_bdevs_operational": 4, 00:21:35.010 "base_bdevs_list": [ 00:21:35.010 { 00:21:35.010 "name": null, 00:21:35.010 "uuid": "65b1da73-d1e2-4391-a5b5-17188d372ae9", 00:21:35.010 "is_configured": false, 00:21:35.010 "data_offset": 0, 00:21:35.010 "data_size": 65536 00:21:35.010 }, 00:21:35.010 { 00:21:35.010 "name": "BaseBdev2", 00:21:35.010 "uuid": "ae54b333-3234-4239-930f-4c3989eadedf", 00:21:35.010 "is_configured": true, 00:21:35.010 "data_offset": 0, 00:21:35.010 "data_size": 65536 00:21:35.010 }, 00:21:35.010 { 00:21:35.010 "name": "BaseBdev3", 00:21:35.010 "uuid": "e4b67e49-f241-4adf-98a4-64e91e8bb39b", 00:21:35.010 "is_configured": true, 00:21:35.010 "data_offset": 0, 00:21:35.010 "data_size": 65536 00:21:35.010 }, 00:21:35.010 { 00:21:35.010 "name": "BaseBdev4", 00:21:35.010 "uuid": "19fd8db7-691f-4cf0-92ed-8887b051e11f", 00:21:35.010 "is_configured": true, 00:21:35.010 "data_offset": 0, 00:21:35.010 "data_size": 65536 00:21:35.010 } 00:21:35.010 ] 00:21:35.010 }' 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:35.010 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.271 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.272 12:53:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 65b1da73-d1e2-4391-a5b5-17188d372ae9 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.272 [2024-12-05 12:53:17.791034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:35.272 [2024-12-05 12:53:17.791083] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:35.272 [2024-12-05 12:53:17.791090] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:35.272 [2024-12-05 12:53:17.791343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:21:35.272 [2024-12-05 12:53:17.791478] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:35.272 [2024-12-05 12:53:17.791501] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:35.272 [2024-12-05 12:53:17.791736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.272 NewBaseBdev 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:21:35.272 [ 00:21:35.272 { 00:21:35.272 "name": "NewBaseBdev", 00:21:35.272 "aliases": [ 00:21:35.272 "65b1da73-d1e2-4391-a5b5-17188d372ae9" 00:21:35.272 ], 00:21:35.272 "product_name": "Malloc disk", 00:21:35.272 "block_size": 512, 00:21:35.272 "num_blocks": 65536, 00:21:35.272 "uuid": "65b1da73-d1e2-4391-a5b5-17188d372ae9", 00:21:35.272 "assigned_rate_limits": { 00:21:35.272 "rw_ios_per_sec": 0, 00:21:35.272 "rw_mbytes_per_sec": 0, 00:21:35.272 "r_mbytes_per_sec": 0, 00:21:35.272 "w_mbytes_per_sec": 0 00:21:35.272 }, 00:21:35.272 "claimed": true, 00:21:35.272 "claim_type": "exclusive_write", 00:21:35.272 "zoned": false, 00:21:35.272 "supported_io_types": { 00:21:35.272 "read": true, 00:21:35.272 "write": true, 00:21:35.272 "unmap": true, 00:21:35.272 "flush": true, 00:21:35.272 "reset": true, 00:21:35.272 "nvme_admin": false, 00:21:35.272 "nvme_io": false, 00:21:35.272 "nvme_io_md": false, 00:21:35.272 "write_zeroes": true, 00:21:35.272 "zcopy": true, 00:21:35.272 "get_zone_info": false, 00:21:35.272 "zone_management": false, 00:21:35.272 "zone_append": false, 00:21:35.272 "compare": false, 00:21:35.272 "compare_and_write": false, 00:21:35.272 "abort": true, 00:21:35.272 "seek_hole": false, 00:21:35.272 "seek_data": false, 00:21:35.272 "copy": true, 00:21:35.272 "nvme_iov_md": false 00:21:35.272 }, 00:21:35.272 "memory_domains": [ 00:21:35.272 { 00:21:35.272 "dma_device_id": "system", 00:21:35.272 "dma_device_type": 1 00:21:35.272 }, 00:21:35.272 { 00:21:35.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.272 "dma_device_type": 2 00:21:35.272 } 00:21:35.272 ], 00:21:35.272 "driver_specific": {} 00:21:35.272 } 00:21:35.272 ] 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.272 "name": "Existed_Raid", 00:21:35.272 "uuid": "6c67febe-4ea8-4994-97f3-cf4d2e243a22", 00:21:35.272 "strip_size_kb": 64, 00:21:35.272 "state": "online", 00:21:35.272 "raid_level": "raid0", 00:21:35.272 "superblock": false, 00:21:35.272 "num_base_bdevs": 4, 00:21:35.272 
"num_base_bdevs_discovered": 4, 00:21:35.272 "num_base_bdevs_operational": 4, 00:21:35.272 "base_bdevs_list": [ 00:21:35.272 { 00:21:35.272 "name": "NewBaseBdev", 00:21:35.272 "uuid": "65b1da73-d1e2-4391-a5b5-17188d372ae9", 00:21:35.272 "is_configured": true, 00:21:35.272 "data_offset": 0, 00:21:35.272 "data_size": 65536 00:21:35.272 }, 00:21:35.272 { 00:21:35.272 "name": "BaseBdev2", 00:21:35.272 "uuid": "ae54b333-3234-4239-930f-4c3989eadedf", 00:21:35.272 "is_configured": true, 00:21:35.272 "data_offset": 0, 00:21:35.272 "data_size": 65536 00:21:35.272 }, 00:21:35.272 { 00:21:35.272 "name": "BaseBdev3", 00:21:35.272 "uuid": "e4b67e49-f241-4adf-98a4-64e91e8bb39b", 00:21:35.272 "is_configured": true, 00:21:35.272 "data_offset": 0, 00:21:35.272 "data_size": 65536 00:21:35.272 }, 00:21:35.272 { 00:21:35.272 "name": "BaseBdev4", 00:21:35.272 "uuid": "19fd8db7-691f-4cf0-92ed-8887b051e11f", 00:21:35.272 "is_configured": true, 00:21:35.272 "data_offset": 0, 00:21:35.272 "data_size": 65536 00:21:35.272 } 00:21:35.272 ] 00:21:35.272 }' 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:35.272 12:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.534 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:35.534 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:35.534 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:35.534 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:35.534 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:35.534 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.796 [2024-12-05 12:53:18.123550] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:35.796 "name": "Existed_Raid", 00:21:35.796 "aliases": [ 00:21:35.796 "6c67febe-4ea8-4994-97f3-cf4d2e243a22" 00:21:35.796 ], 00:21:35.796 "product_name": "Raid Volume", 00:21:35.796 "block_size": 512, 00:21:35.796 "num_blocks": 262144, 00:21:35.796 "uuid": "6c67febe-4ea8-4994-97f3-cf4d2e243a22", 00:21:35.796 "assigned_rate_limits": { 00:21:35.796 "rw_ios_per_sec": 0, 00:21:35.796 "rw_mbytes_per_sec": 0, 00:21:35.796 "r_mbytes_per_sec": 0, 00:21:35.796 "w_mbytes_per_sec": 0 00:21:35.796 }, 00:21:35.796 "claimed": false, 00:21:35.796 "zoned": false, 00:21:35.796 "supported_io_types": { 00:21:35.796 "read": true, 00:21:35.796 "write": true, 00:21:35.796 "unmap": true, 00:21:35.796 "flush": true, 00:21:35.796 "reset": true, 00:21:35.796 "nvme_admin": false, 00:21:35.796 "nvme_io": false, 00:21:35.796 "nvme_io_md": false, 00:21:35.796 "write_zeroes": true, 00:21:35.796 "zcopy": false, 00:21:35.796 "get_zone_info": false, 00:21:35.796 "zone_management": false, 00:21:35.796 "zone_append": false, 00:21:35.796 "compare": false, 00:21:35.796 "compare_and_write": false, 00:21:35.796 "abort": false, 00:21:35.796 "seek_hole": false, 00:21:35.796 "seek_data": false, 00:21:35.796 "copy": false, 00:21:35.796 "nvme_iov_md": false 00:21:35.796 }, 00:21:35.796 "memory_domains": [ 
00:21:35.796 { 00:21:35.796 "dma_device_id": "system", 00:21:35.796 "dma_device_type": 1 00:21:35.796 }, 00:21:35.796 { 00:21:35.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.796 "dma_device_type": 2 00:21:35.796 }, 00:21:35.796 { 00:21:35.796 "dma_device_id": "system", 00:21:35.796 "dma_device_type": 1 00:21:35.796 }, 00:21:35.796 { 00:21:35.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.796 "dma_device_type": 2 00:21:35.796 }, 00:21:35.796 { 00:21:35.796 "dma_device_id": "system", 00:21:35.796 "dma_device_type": 1 00:21:35.796 }, 00:21:35.796 { 00:21:35.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.796 "dma_device_type": 2 00:21:35.796 }, 00:21:35.796 { 00:21:35.796 "dma_device_id": "system", 00:21:35.796 "dma_device_type": 1 00:21:35.796 }, 00:21:35.796 { 00:21:35.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.796 "dma_device_type": 2 00:21:35.796 } 00:21:35.796 ], 00:21:35.796 "driver_specific": { 00:21:35.796 "raid": { 00:21:35.796 "uuid": "6c67febe-4ea8-4994-97f3-cf4d2e243a22", 00:21:35.796 "strip_size_kb": 64, 00:21:35.796 "state": "online", 00:21:35.796 "raid_level": "raid0", 00:21:35.796 "superblock": false, 00:21:35.796 "num_base_bdevs": 4, 00:21:35.796 "num_base_bdevs_discovered": 4, 00:21:35.796 "num_base_bdevs_operational": 4, 00:21:35.796 "base_bdevs_list": [ 00:21:35.796 { 00:21:35.796 "name": "NewBaseBdev", 00:21:35.796 "uuid": "65b1da73-d1e2-4391-a5b5-17188d372ae9", 00:21:35.796 "is_configured": true, 00:21:35.796 "data_offset": 0, 00:21:35.796 "data_size": 65536 00:21:35.796 }, 00:21:35.796 { 00:21:35.796 "name": "BaseBdev2", 00:21:35.796 "uuid": "ae54b333-3234-4239-930f-4c3989eadedf", 00:21:35.796 "is_configured": true, 00:21:35.796 "data_offset": 0, 00:21:35.796 "data_size": 65536 00:21:35.796 }, 00:21:35.796 { 00:21:35.796 "name": "BaseBdev3", 00:21:35.796 "uuid": "e4b67e49-f241-4adf-98a4-64e91e8bb39b", 00:21:35.796 "is_configured": true, 00:21:35.796 "data_offset": 0, 00:21:35.796 "data_size": 65536 
00:21:35.796 }, 00:21:35.796 { 00:21:35.796 "name": "BaseBdev4", 00:21:35.796 "uuid": "19fd8db7-691f-4cf0-92ed-8887b051e11f", 00:21:35.796 "is_configured": true, 00:21:35.796 "data_offset": 0, 00:21:35.796 "data_size": 65536 00:21:35.796 } 00:21:35.796 ] 00:21:35.796 } 00:21:35.796 } 00:21:35.796 }' 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:35.796 BaseBdev2 00:21:35.796 BaseBdev3 00:21:35.796 BaseBdev4' 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.796 
12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.796 [2024-12-05 12:53:18.367229] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:35.796 [2024-12-05 12:53:18.367262] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:35.796 [2024-12-05 12:53:18.367327] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:35.796 [2024-12-05 12:53:18.367394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:35.796 [2024-12-05 12:53:18.367403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67503 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 67503 ']' 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67503 00:21:35.796 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:21:35.797 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.797 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67503 00:21:36.086 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:36.086 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:36.086 killing process with pid 67503 00:21:36.086 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67503' 00:21:36.086 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67503 00:21:36.086 [2024-12-05 12:53:18.400780] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:36.086 12:53:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67503 00:21:36.086 [2024-12-05 12:53:18.648078] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:21:37.024 00:21:37.024 real 0m8.397s 00:21:37.024 user 0m13.358s 00:21:37.024 sys 0m1.237s 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:37.024 ************************************ 00:21:37.024 END TEST raid_state_function_test 00:21:37.024 ************************************ 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.024 12:53:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:21:37.024 12:53:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:37.024 12:53:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:37.024 12:53:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:37.024 ************************************ 00:21:37.024 START TEST raid_state_function_test_sb 00:21:37.024 ************************************ 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:37.024 
12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:37.024 Process raid pid: 68147 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68147 00:21:37.024 12:53:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68147' 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68147 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68147 ']' 00:21:37.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.024 12:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.024 [2024-12-05 12:53:19.491194] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:21:37.024 [2024-12-05 12:53:19.491314] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.281 [2024-12-05 12:53:19.700157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.281 [2024-12-05 12:53:19.805927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.540 [2024-12-05 12:53:19.942451] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:37.540 [2024-12-05 12:53:19.942482] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.801 [2024-12-05 12:53:20.344872] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:37.801 [2024-12-05 12:53:20.344929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:37.801 [2024-12-05 12:53:20.344938] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:37.801 [2024-12-05 12:53:20.344948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:37.801 [2024-12-05 12:53:20.344954] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:21:37.801 [2024-12-05 12:53:20.344963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:37.801 [2024-12-05 12:53:20.344970] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:37.801 [2024-12-05 12:53:20.344978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.801 12:53:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.801 "name": "Existed_Raid", 00:21:37.801 "uuid": "b02613ad-5b30-41ba-98a4-a758cd56be47", 00:21:37.801 "strip_size_kb": 64, 00:21:37.801 "state": "configuring", 00:21:37.801 "raid_level": "raid0", 00:21:37.801 "superblock": true, 00:21:37.801 "num_base_bdevs": 4, 00:21:37.801 "num_base_bdevs_discovered": 0, 00:21:37.801 "num_base_bdevs_operational": 4, 00:21:37.801 "base_bdevs_list": [ 00:21:37.801 { 00:21:37.801 "name": "BaseBdev1", 00:21:37.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.801 "is_configured": false, 00:21:37.801 "data_offset": 0, 00:21:37.801 "data_size": 0 00:21:37.801 }, 00:21:37.801 { 00:21:37.801 "name": "BaseBdev2", 00:21:37.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.801 "is_configured": false, 00:21:37.801 "data_offset": 0, 00:21:37.801 "data_size": 0 00:21:37.801 }, 00:21:37.801 { 00:21:37.801 "name": "BaseBdev3", 00:21:37.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.801 "is_configured": false, 00:21:37.801 "data_offset": 0, 00:21:37.801 "data_size": 0 00:21:37.801 }, 00:21:37.801 { 00:21:37.801 "name": "BaseBdev4", 00:21:37.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.801 "is_configured": false, 00:21:37.801 "data_offset": 0, 00:21:37.801 "data_size": 0 00:21:37.801 } 00:21:37.801 ] 00:21:37.801 }' 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.801 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.061 12:53:20 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:38.061 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.061 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.061 [2024-12-05 12:53:20.640882] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:38.061 [2024-12-05 12:53:20.640921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.321 [2024-12-05 12:53:20.652907] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:38.321 [2024-12-05 12:53:20.652947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:38.321 [2024-12-05 12:53:20.652955] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:38.321 [2024-12-05 12:53:20.652964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:38.321 [2024-12-05 12:53:20.652970] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:38.321 [2024-12-05 12:53:20.652979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:38.321 [2024-12-05 12:53:20.652985] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:21:38.321 [2024-12-05 12:53:20.652993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.321 [2024-12-05 12:53:20.685204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:38.321 BaseBdev1 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.321 [ 00:21:38.321 { 00:21:38.321 "name": "BaseBdev1", 00:21:38.321 "aliases": [ 00:21:38.321 "a663a558-513f-4713-8b01-954dbbb8d380" 00:21:38.321 ], 00:21:38.321 "product_name": "Malloc disk", 00:21:38.321 "block_size": 512, 00:21:38.321 "num_blocks": 65536, 00:21:38.321 "uuid": "a663a558-513f-4713-8b01-954dbbb8d380", 00:21:38.321 "assigned_rate_limits": { 00:21:38.321 "rw_ios_per_sec": 0, 00:21:38.321 "rw_mbytes_per_sec": 0, 00:21:38.321 "r_mbytes_per_sec": 0, 00:21:38.321 "w_mbytes_per_sec": 0 00:21:38.321 }, 00:21:38.321 "claimed": true, 00:21:38.321 "claim_type": "exclusive_write", 00:21:38.321 "zoned": false, 00:21:38.321 "supported_io_types": { 00:21:38.321 "read": true, 00:21:38.321 "write": true, 00:21:38.321 "unmap": true, 00:21:38.321 "flush": true, 00:21:38.321 "reset": true, 00:21:38.321 "nvme_admin": false, 00:21:38.321 "nvme_io": false, 00:21:38.321 "nvme_io_md": false, 00:21:38.321 "write_zeroes": true, 00:21:38.321 "zcopy": true, 00:21:38.321 "get_zone_info": false, 00:21:38.321 "zone_management": false, 00:21:38.321 "zone_append": false, 00:21:38.321 "compare": false, 00:21:38.321 "compare_and_write": false, 00:21:38.321 "abort": true, 00:21:38.321 "seek_hole": false, 00:21:38.321 "seek_data": false, 00:21:38.321 "copy": true, 00:21:38.321 "nvme_iov_md": false 00:21:38.321 }, 00:21:38.321 "memory_domains": [ 00:21:38.321 { 00:21:38.321 "dma_device_id": "system", 00:21:38.321 "dma_device_type": 1 00:21:38.321 }, 00:21:38.321 { 00:21:38.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.321 "dma_device_type": 2 00:21:38.321 } 00:21:38.321 ], 00:21:38.321 "driver_specific": {} 
00:21:38.321 } 00:21:38.321 ] 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.321 "name": "Existed_Raid", 00:21:38.321 "uuid": "bc886443-d2a5-40bc-addc-c52f241a6248", 00:21:38.321 "strip_size_kb": 64, 00:21:38.321 "state": "configuring", 00:21:38.321 "raid_level": "raid0", 00:21:38.321 "superblock": true, 00:21:38.321 "num_base_bdevs": 4, 00:21:38.321 "num_base_bdevs_discovered": 1, 00:21:38.321 "num_base_bdevs_operational": 4, 00:21:38.321 "base_bdevs_list": [ 00:21:38.321 { 00:21:38.321 "name": "BaseBdev1", 00:21:38.321 "uuid": "a663a558-513f-4713-8b01-954dbbb8d380", 00:21:38.321 "is_configured": true, 00:21:38.321 "data_offset": 2048, 00:21:38.321 "data_size": 63488 00:21:38.321 }, 00:21:38.321 { 00:21:38.321 "name": "BaseBdev2", 00:21:38.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.321 "is_configured": false, 00:21:38.321 "data_offset": 0, 00:21:38.321 "data_size": 0 00:21:38.321 }, 00:21:38.321 { 00:21:38.321 "name": "BaseBdev3", 00:21:38.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.321 "is_configured": false, 00:21:38.321 "data_offset": 0, 00:21:38.321 "data_size": 0 00:21:38.321 }, 00:21:38.321 { 00:21:38.321 "name": "BaseBdev4", 00:21:38.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.321 "is_configured": false, 00:21:38.321 "data_offset": 0, 00:21:38.321 "data_size": 0 00:21:38.321 } 00:21:38.321 ] 00:21:38.321 }' 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.321 12:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.582 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:38.582 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.582 12:53:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:38.582 [2024-12-05 12:53:21.057351] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:38.582 [2024-12-05 12:53:21.057403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:38.582 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.582 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:38.582 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.582 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.582 [2024-12-05 12:53:21.065409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:38.582 [2024-12-05 12:53:21.067241] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:38.582 [2024-12-05 12:53:21.067284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:38.582 [2024-12-05 12:53:21.067294] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:38.582 [2024-12-05 12:53:21.067307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:38.582 [2024-12-05 12:53:21.067314] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:38.582 [2024-12-05 12:53:21.067322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:38.582 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.582 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:38.582 12:53:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:38.583 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:38.583 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:38.583 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:38.583 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:38.583 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:38.583 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:38.583 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.583 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.583 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.583 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.583 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.583 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.583 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.583 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.583 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.583 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.583 "name": 
"Existed_Raid", 00:21:38.583 "uuid": "c144d324-de07-42c6-878f-c14711d2cc1a", 00:21:38.583 "strip_size_kb": 64, 00:21:38.583 "state": "configuring", 00:21:38.583 "raid_level": "raid0", 00:21:38.583 "superblock": true, 00:21:38.583 "num_base_bdevs": 4, 00:21:38.583 "num_base_bdevs_discovered": 1, 00:21:38.583 "num_base_bdevs_operational": 4, 00:21:38.583 "base_bdevs_list": [ 00:21:38.583 { 00:21:38.583 "name": "BaseBdev1", 00:21:38.583 "uuid": "a663a558-513f-4713-8b01-954dbbb8d380", 00:21:38.583 "is_configured": true, 00:21:38.583 "data_offset": 2048, 00:21:38.583 "data_size": 63488 00:21:38.583 }, 00:21:38.583 { 00:21:38.583 "name": "BaseBdev2", 00:21:38.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.583 "is_configured": false, 00:21:38.583 "data_offset": 0, 00:21:38.583 "data_size": 0 00:21:38.583 }, 00:21:38.583 { 00:21:38.583 "name": "BaseBdev3", 00:21:38.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.583 "is_configured": false, 00:21:38.583 "data_offset": 0, 00:21:38.583 "data_size": 0 00:21:38.583 }, 00:21:38.583 { 00:21:38.583 "name": "BaseBdev4", 00:21:38.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.583 "is_configured": false, 00:21:38.583 "data_offset": 0, 00:21:38.583 "data_size": 0 00:21:38.583 } 00:21:38.583 ] 00:21:38.583 }' 00:21:38.583 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.583 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.844 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:38.844 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.844 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.107 [2024-12-05 12:53:21.440048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:21:39.107 BaseBdev2 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.107 [ 00:21:39.107 { 00:21:39.107 "name": "BaseBdev2", 00:21:39.107 "aliases": [ 00:21:39.107 "95105110-34e8-4ccb-b616-14d78d16e1ed" 00:21:39.107 ], 00:21:39.107 "product_name": "Malloc disk", 00:21:39.107 "block_size": 512, 00:21:39.107 "num_blocks": 65536, 00:21:39.107 "uuid": "95105110-34e8-4ccb-b616-14d78d16e1ed", 00:21:39.107 
"assigned_rate_limits": { 00:21:39.107 "rw_ios_per_sec": 0, 00:21:39.107 "rw_mbytes_per_sec": 0, 00:21:39.107 "r_mbytes_per_sec": 0, 00:21:39.107 "w_mbytes_per_sec": 0 00:21:39.107 }, 00:21:39.107 "claimed": true, 00:21:39.107 "claim_type": "exclusive_write", 00:21:39.107 "zoned": false, 00:21:39.107 "supported_io_types": { 00:21:39.107 "read": true, 00:21:39.107 "write": true, 00:21:39.107 "unmap": true, 00:21:39.107 "flush": true, 00:21:39.107 "reset": true, 00:21:39.107 "nvme_admin": false, 00:21:39.107 "nvme_io": false, 00:21:39.107 "nvme_io_md": false, 00:21:39.107 "write_zeroes": true, 00:21:39.107 "zcopy": true, 00:21:39.107 "get_zone_info": false, 00:21:39.107 "zone_management": false, 00:21:39.107 "zone_append": false, 00:21:39.107 "compare": false, 00:21:39.107 "compare_and_write": false, 00:21:39.107 "abort": true, 00:21:39.107 "seek_hole": false, 00:21:39.107 "seek_data": false, 00:21:39.107 "copy": true, 00:21:39.107 "nvme_iov_md": false 00:21:39.107 }, 00:21:39.107 "memory_domains": [ 00:21:39.107 { 00:21:39.107 "dma_device_id": "system", 00:21:39.107 "dma_device_type": 1 00:21:39.107 }, 00:21:39.107 { 00:21:39.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.107 "dma_device_type": 2 00:21:39.107 } 00:21:39.107 ], 00:21:39.107 "driver_specific": {} 00:21:39.107 } 00:21:39.107 ] 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.107 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.107 "name": "Existed_Raid", 00:21:39.107 "uuid": "c144d324-de07-42c6-878f-c14711d2cc1a", 00:21:39.107 "strip_size_kb": 64, 00:21:39.107 "state": "configuring", 00:21:39.107 "raid_level": "raid0", 00:21:39.107 "superblock": true, 00:21:39.107 "num_base_bdevs": 4, 00:21:39.107 "num_base_bdevs_discovered": 2, 00:21:39.107 "num_base_bdevs_operational": 4, 
00:21:39.107 "base_bdevs_list": [ 00:21:39.107 { 00:21:39.108 "name": "BaseBdev1", 00:21:39.108 "uuid": "a663a558-513f-4713-8b01-954dbbb8d380", 00:21:39.108 "is_configured": true, 00:21:39.108 "data_offset": 2048, 00:21:39.108 "data_size": 63488 00:21:39.108 }, 00:21:39.108 { 00:21:39.108 "name": "BaseBdev2", 00:21:39.108 "uuid": "95105110-34e8-4ccb-b616-14d78d16e1ed", 00:21:39.108 "is_configured": true, 00:21:39.108 "data_offset": 2048, 00:21:39.108 "data_size": 63488 00:21:39.108 }, 00:21:39.108 { 00:21:39.108 "name": "BaseBdev3", 00:21:39.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.108 "is_configured": false, 00:21:39.108 "data_offset": 0, 00:21:39.108 "data_size": 0 00:21:39.108 }, 00:21:39.108 { 00:21:39.108 "name": "BaseBdev4", 00:21:39.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.108 "is_configured": false, 00:21:39.108 "data_offset": 0, 00:21:39.108 "data_size": 0 00:21:39.108 } 00:21:39.108 ] 00:21:39.108 }' 00:21:39.108 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.108 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.370 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:39.370 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.370 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.370 [2024-12-05 12:53:21.848807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:39.370 BaseBdev3 00:21:39.370 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.370 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:39.370 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:21:39.370 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:39.370 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:39.370 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:39.370 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:39.370 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:39.370 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.370 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.370 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.370 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:39.370 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.370 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.370 [ 00:21:39.370 { 00:21:39.370 "name": "BaseBdev3", 00:21:39.370 "aliases": [ 00:21:39.370 "4b957570-966a-418f-9802-3e689d540ed6" 00:21:39.370 ], 00:21:39.370 "product_name": "Malloc disk", 00:21:39.370 "block_size": 512, 00:21:39.370 "num_blocks": 65536, 00:21:39.370 "uuid": "4b957570-966a-418f-9802-3e689d540ed6", 00:21:39.370 "assigned_rate_limits": { 00:21:39.370 "rw_ios_per_sec": 0, 00:21:39.370 "rw_mbytes_per_sec": 0, 00:21:39.370 "r_mbytes_per_sec": 0, 00:21:39.370 "w_mbytes_per_sec": 0 00:21:39.370 }, 00:21:39.370 "claimed": true, 00:21:39.370 "claim_type": "exclusive_write", 00:21:39.370 "zoned": false, 00:21:39.370 "supported_io_types": { 00:21:39.370 "read": true, 00:21:39.370 
"write": true, 00:21:39.370 "unmap": true, 00:21:39.370 "flush": true, 00:21:39.370 "reset": true, 00:21:39.370 "nvme_admin": false, 00:21:39.370 "nvme_io": false, 00:21:39.370 "nvme_io_md": false, 00:21:39.370 "write_zeroes": true, 00:21:39.370 "zcopy": true, 00:21:39.370 "get_zone_info": false, 00:21:39.370 "zone_management": false, 00:21:39.370 "zone_append": false, 00:21:39.370 "compare": false, 00:21:39.370 "compare_and_write": false, 00:21:39.370 "abort": true, 00:21:39.370 "seek_hole": false, 00:21:39.370 "seek_data": false, 00:21:39.370 "copy": true, 00:21:39.370 "nvme_iov_md": false 00:21:39.370 }, 00:21:39.370 "memory_domains": [ 00:21:39.370 { 00:21:39.370 "dma_device_id": "system", 00:21:39.370 "dma_device_type": 1 00:21:39.370 }, 00:21:39.370 { 00:21:39.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.370 "dma_device_type": 2 00:21:39.370 } 00:21:39.370 ], 00:21:39.370 "driver_specific": {} 00:21:39.370 } 00:21:39.370 ] 00:21:39.370 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.370 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:39.370 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:39.370 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:39.371 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:39.371 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:39.371 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:39.371 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:39.371 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:21:39.371 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:39.371 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.371 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.371 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.371 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.371 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.371 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:39.371 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.371 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.371 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.371 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.371 "name": "Existed_Raid", 00:21:39.371 "uuid": "c144d324-de07-42c6-878f-c14711d2cc1a", 00:21:39.371 "strip_size_kb": 64, 00:21:39.371 "state": "configuring", 00:21:39.371 "raid_level": "raid0", 00:21:39.371 "superblock": true, 00:21:39.371 "num_base_bdevs": 4, 00:21:39.371 "num_base_bdevs_discovered": 3, 00:21:39.371 "num_base_bdevs_operational": 4, 00:21:39.371 "base_bdevs_list": [ 00:21:39.371 { 00:21:39.371 "name": "BaseBdev1", 00:21:39.371 "uuid": "a663a558-513f-4713-8b01-954dbbb8d380", 00:21:39.371 "is_configured": true, 00:21:39.371 "data_offset": 2048, 00:21:39.371 "data_size": 63488 00:21:39.371 }, 00:21:39.371 { 00:21:39.371 "name": "BaseBdev2", 00:21:39.371 "uuid": 
"95105110-34e8-4ccb-b616-14d78d16e1ed", 00:21:39.371 "is_configured": true, 00:21:39.371 "data_offset": 2048, 00:21:39.371 "data_size": 63488 00:21:39.371 }, 00:21:39.371 { 00:21:39.371 "name": "BaseBdev3", 00:21:39.371 "uuid": "4b957570-966a-418f-9802-3e689d540ed6", 00:21:39.371 "is_configured": true, 00:21:39.371 "data_offset": 2048, 00:21:39.371 "data_size": 63488 00:21:39.371 }, 00:21:39.371 { 00:21:39.371 "name": "BaseBdev4", 00:21:39.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.371 "is_configured": false, 00:21:39.371 "data_offset": 0, 00:21:39.371 "data_size": 0 00:21:39.371 } 00:21:39.371 ] 00:21:39.371 }' 00:21:39.371 12:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.371 12:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.630 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:39.630 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.630 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.890 [2024-12-05 12:53:22.227411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:39.890 [2024-12-05 12:53:22.227661] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:39.890 [2024-12-05 12:53:22.227683] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:39.890 [2024-12-05 12:53:22.227942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:39.890 BaseBdev4 00:21:39.890 [2024-12-05 12:53:22.228082] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:39.890 [2024-12-05 12:53:22.228093] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:21:39.891 [2024-12-05 12:53:22.228219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.891 [ 00:21:39.891 { 00:21:39.891 "name": "BaseBdev4", 00:21:39.891 "aliases": [ 00:21:39.891 "a540540e-a003-41bf-8c45-d7b1414f2eb7" 00:21:39.891 ], 00:21:39.891 "product_name": "Malloc disk", 00:21:39.891 "block_size": 512, 00:21:39.891 
"num_blocks": 65536, 00:21:39.891 "uuid": "a540540e-a003-41bf-8c45-d7b1414f2eb7", 00:21:39.891 "assigned_rate_limits": { 00:21:39.891 "rw_ios_per_sec": 0, 00:21:39.891 "rw_mbytes_per_sec": 0, 00:21:39.891 "r_mbytes_per_sec": 0, 00:21:39.891 "w_mbytes_per_sec": 0 00:21:39.891 }, 00:21:39.891 "claimed": true, 00:21:39.891 "claim_type": "exclusive_write", 00:21:39.891 "zoned": false, 00:21:39.891 "supported_io_types": { 00:21:39.891 "read": true, 00:21:39.891 "write": true, 00:21:39.891 "unmap": true, 00:21:39.891 "flush": true, 00:21:39.891 "reset": true, 00:21:39.891 "nvme_admin": false, 00:21:39.891 "nvme_io": false, 00:21:39.891 "nvme_io_md": false, 00:21:39.891 "write_zeroes": true, 00:21:39.891 "zcopy": true, 00:21:39.891 "get_zone_info": false, 00:21:39.891 "zone_management": false, 00:21:39.891 "zone_append": false, 00:21:39.891 "compare": false, 00:21:39.891 "compare_and_write": false, 00:21:39.891 "abort": true, 00:21:39.891 "seek_hole": false, 00:21:39.891 "seek_data": false, 00:21:39.891 "copy": true, 00:21:39.891 "nvme_iov_md": false 00:21:39.891 }, 00:21:39.891 "memory_domains": [ 00:21:39.891 { 00:21:39.891 "dma_device_id": "system", 00:21:39.891 "dma_device_type": 1 00:21:39.891 }, 00:21:39.891 { 00:21:39.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.891 "dma_device_type": 2 00:21:39.891 } 00:21:39.891 ], 00:21:39.891 "driver_specific": {} 00:21:39.891 } 00:21:39.891 ] 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.891 "name": "Existed_Raid", 00:21:39.891 "uuid": "c144d324-de07-42c6-878f-c14711d2cc1a", 00:21:39.891 "strip_size_kb": 64, 00:21:39.891 "state": "online", 00:21:39.891 "raid_level": "raid0", 00:21:39.891 "superblock": true, 00:21:39.891 "num_base_bdevs": 4, 
00:21:39.891 "num_base_bdevs_discovered": 4, 00:21:39.891 "num_base_bdevs_operational": 4, 00:21:39.891 "base_bdevs_list": [ 00:21:39.891 { 00:21:39.891 "name": "BaseBdev1", 00:21:39.891 "uuid": "a663a558-513f-4713-8b01-954dbbb8d380", 00:21:39.891 "is_configured": true, 00:21:39.891 "data_offset": 2048, 00:21:39.891 "data_size": 63488 00:21:39.891 }, 00:21:39.891 { 00:21:39.891 "name": "BaseBdev2", 00:21:39.891 "uuid": "95105110-34e8-4ccb-b616-14d78d16e1ed", 00:21:39.891 "is_configured": true, 00:21:39.891 "data_offset": 2048, 00:21:39.891 "data_size": 63488 00:21:39.891 }, 00:21:39.891 { 00:21:39.891 "name": "BaseBdev3", 00:21:39.891 "uuid": "4b957570-966a-418f-9802-3e689d540ed6", 00:21:39.891 "is_configured": true, 00:21:39.891 "data_offset": 2048, 00:21:39.891 "data_size": 63488 00:21:39.891 }, 00:21:39.891 { 00:21:39.891 "name": "BaseBdev4", 00:21:39.891 "uuid": "a540540e-a003-41bf-8c45-d7b1414f2eb7", 00:21:39.891 "is_configured": true, 00:21:39.891 "data_offset": 2048, 00:21:39.891 "data_size": 63488 00:21:39.891 } 00:21:39.891 ] 00:21:39.891 }' 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.891 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.184 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:40.184 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:40.184 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:40.184 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:40.184 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:40.184 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:40.184 
12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:40.184 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.184 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.184 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:40.184 [2024-12-05 12:53:22.575921] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:40.184 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.184 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:40.184 "name": "Existed_Raid", 00:21:40.184 "aliases": [ 00:21:40.184 "c144d324-de07-42c6-878f-c14711d2cc1a" 00:21:40.184 ], 00:21:40.184 "product_name": "Raid Volume", 00:21:40.184 "block_size": 512, 00:21:40.184 "num_blocks": 253952, 00:21:40.184 "uuid": "c144d324-de07-42c6-878f-c14711d2cc1a", 00:21:40.184 "assigned_rate_limits": { 00:21:40.184 "rw_ios_per_sec": 0, 00:21:40.184 "rw_mbytes_per_sec": 0, 00:21:40.184 "r_mbytes_per_sec": 0, 00:21:40.184 "w_mbytes_per_sec": 0 00:21:40.184 }, 00:21:40.184 "claimed": false, 00:21:40.184 "zoned": false, 00:21:40.184 "supported_io_types": { 00:21:40.184 "read": true, 00:21:40.184 "write": true, 00:21:40.184 "unmap": true, 00:21:40.184 "flush": true, 00:21:40.184 "reset": true, 00:21:40.184 "nvme_admin": false, 00:21:40.184 "nvme_io": false, 00:21:40.184 "nvme_io_md": false, 00:21:40.184 "write_zeroes": true, 00:21:40.184 "zcopy": false, 00:21:40.184 "get_zone_info": false, 00:21:40.184 "zone_management": false, 00:21:40.184 "zone_append": false, 00:21:40.184 "compare": false, 00:21:40.184 "compare_and_write": false, 00:21:40.184 "abort": false, 00:21:40.184 "seek_hole": false, 00:21:40.184 "seek_data": false, 00:21:40.184 "copy": false, 00:21:40.184 
"nvme_iov_md": false 00:21:40.184 }, 00:21:40.184 "memory_domains": [ 00:21:40.184 { 00:21:40.184 "dma_device_id": "system", 00:21:40.184 "dma_device_type": 1 00:21:40.184 }, 00:21:40.184 { 00:21:40.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.184 "dma_device_type": 2 00:21:40.184 }, 00:21:40.184 { 00:21:40.184 "dma_device_id": "system", 00:21:40.184 "dma_device_type": 1 00:21:40.184 }, 00:21:40.184 { 00:21:40.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.184 "dma_device_type": 2 00:21:40.184 }, 00:21:40.184 { 00:21:40.185 "dma_device_id": "system", 00:21:40.185 "dma_device_type": 1 00:21:40.185 }, 00:21:40.185 { 00:21:40.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.185 "dma_device_type": 2 00:21:40.185 }, 00:21:40.185 { 00:21:40.185 "dma_device_id": "system", 00:21:40.185 "dma_device_type": 1 00:21:40.185 }, 00:21:40.185 { 00:21:40.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.185 "dma_device_type": 2 00:21:40.185 } 00:21:40.185 ], 00:21:40.185 "driver_specific": { 00:21:40.185 "raid": { 00:21:40.185 "uuid": "c144d324-de07-42c6-878f-c14711d2cc1a", 00:21:40.185 "strip_size_kb": 64, 00:21:40.185 "state": "online", 00:21:40.185 "raid_level": "raid0", 00:21:40.185 "superblock": true, 00:21:40.185 "num_base_bdevs": 4, 00:21:40.185 "num_base_bdevs_discovered": 4, 00:21:40.185 "num_base_bdevs_operational": 4, 00:21:40.185 "base_bdevs_list": [ 00:21:40.185 { 00:21:40.185 "name": "BaseBdev1", 00:21:40.185 "uuid": "a663a558-513f-4713-8b01-954dbbb8d380", 00:21:40.185 "is_configured": true, 00:21:40.185 "data_offset": 2048, 00:21:40.185 "data_size": 63488 00:21:40.185 }, 00:21:40.185 { 00:21:40.185 "name": "BaseBdev2", 00:21:40.185 "uuid": "95105110-34e8-4ccb-b616-14d78d16e1ed", 00:21:40.185 "is_configured": true, 00:21:40.185 "data_offset": 2048, 00:21:40.185 "data_size": 63488 00:21:40.185 }, 00:21:40.185 { 00:21:40.185 "name": "BaseBdev3", 00:21:40.185 "uuid": "4b957570-966a-418f-9802-3e689d540ed6", 00:21:40.185 "is_configured": true, 
00:21:40.185 "data_offset": 2048, 00:21:40.185 "data_size": 63488 00:21:40.185 }, 00:21:40.185 { 00:21:40.185 "name": "BaseBdev4", 00:21:40.185 "uuid": "a540540e-a003-41bf-8c45-d7b1414f2eb7", 00:21:40.185 "is_configured": true, 00:21:40.185 "data_offset": 2048, 00:21:40.185 "data_size": 63488 00:21:40.185 } 00:21:40.185 ] 00:21:40.185 } 00:21:40.185 } 00:21:40.185 }' 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:40.185 BaseBdev2 00:21:40.185 BaseBdev3 00:21:40.185 BaseBdev4' 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:40.185 12:53:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.185 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.445 [2024-12-05 12:53:22.811652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:40.445 [2024-12-05 12:53:22.811690] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:40.445 [2024-12-05 12:53:22.811739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.445 "name": "Existed_Raid", 00:21:40.445 "uuid": "c144d324-de07-42c6-878f-c14711d2cc1a", 00:21:40.445 "strip_size_kb": 64, 00:21:40.445 "state": "offline", 00:21:40.445 "raid_level": "raid0", 00:21:40.445 "superblock": true, 00:21:40.445 "num_base_bdevs": 4, 00:21:40.445 "num_base_bdevs_discovered": 3, 00:21:40.445 "num_base_bdevs_operational": 3, 00:21:40.445 "base_bdevs_list": [ 00:21:40.445 { 00:21:40.445 "name": null, 00:21:40.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.445 "is_configured": false, 00:21:40.445 "data_offset": 0, 00:21:40.445 "data_size": 63488 00:21:40.445 }, 00:21:40.445 { 00:21:40.445 "name": "BaseBdev2", 00:21:40.445 "uuid": "95105110-34e8-4ccb-b616-14d78d16e1ed", 00:21:40.445 "is_configured": true, 00:21:40.445 "data_offset": 2048, 00:21:40.445 "data_size": 63488 00:21:40.445 }, 00:21:40.445 { 00:21:40.445 "name": "BaseBdev3", 00:21:40.445 "uuid": "4b957570-966a-418f-9802-3e689d540ed6", 00:21:40.445 "is_configured": true, 00:21:40.445 "data_offset": 2048, 00:21:40.445 "data_size": 63488 00:21:40.445 }, 00:21:40.445 { 00:21:40.445 "name": "BaseBdev4", 00:21:40.445 "uuid": "a540540e-a003-41bf-8c45-d7b1414f2eb7", 00:21:40.445 "is_configured": true, 00:21:40.445 "data_offset": 2048, 00:21:40.445 "data_size": 63488 00:21:40.445 } 00:21:40.445 ] 00:21:40.445 }' 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.445 12:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.706 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:40.706 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:40.706 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.706 
12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.706 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.706 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:40.706 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.706 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:40.706 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:40.707 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:40.707 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.707 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.707 [2024-12-05 12:53:23.226907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:40.707 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.707 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:40.707 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.967 [2024-12-05 12:53:23.321912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:21:40.967 12:53:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.967 [2024-12-05 12:53:23.425487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:40.967 [2024-12-05 12:53:23.425564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.967 BaseBdev2 00:21:40.967 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.227 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:41.227 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:41.227 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:41.227 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:41.227 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:41.227 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:41.227 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:41.227 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.227 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.227 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.227 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:41.227 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.227 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.227 [ 00:21:41.227 { 00:21:41.227 "name": "BaseBdev2", 00:21:41.227 "aliases": [ 00:21:41.227 
"b1ed0cb3-e468-4753-a18d-068139f1953f" 00:21:41.227 ], 00:21:41.227 "product_name": "Malloc disk", 00:21:41.227 "block_size": 512, 00:21:41.227 "num_blocks": 65536, 00:21:41.227 "uuid": "b1ed0cb3-e468-4753-a18d-068139f1953f", 00:21:41.227 "assigned_rate_limits": { 00:21:41.227 "rw_ios_per_sec": 0, 00:21:41.227 "rw_mbytes_per_sec": 0, 00:21:41.227 "r_mbytes_per_sec": 0, 00:21:41.227 "w_mbytes_per_sec": 0 00:21:41.228 }, 00:21:41.228 "claimed": false, 00:21:41.228 "zoned": false, 00:21:41.228 "supported_io_types": { 00:21:41.228 "read": true, 00:21:41.228 "write": true, 00:21:41.228 "unmap": true, 00:21:41.228 "flush": true, 00:21:41.228 "reset": true, 00:21:41.228 "nvme_admin": false, 00:21:41.228 "nvme_io": false, 00:21:41.228 "nvme_io_md": false, 00:21:41.228 "write_zeroes": true, 00:21:41.228 "zcopy": true, 00:21:41.228 "get_zone_info": false, 00:21:41.228 "zone_management": false, 00:21:41.228 "zone_append": false, 00:21:41.228 "compare": false, 00:21:41.228 "compare_and_write": false, 00:21:41.228 "abort": true, 00:21:41.228 "seek_hole": false, 00:21:41.228 "seek_data": false, 00:21:41.228 "copy": true, 00:21:41.228 "nvme_iov_md": false 00:21:41.228 }, 00:21:41.228 "memory_domains": [ 00:21:41.228 { 00:21:41.228 "dma_device_id": "system", 00:21:41.228 "dma_device_type": 1 00:21:41.228 }, 00:21:41.228 { 00:21:41.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.228 "dma_device_type": 2 00:21:41.228 } 00:21:41.228 ], 00:21:41.228 "driver_specific": {} 00:21:41.228 } 00:21:41.228 ] 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:41.228 12:53:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.228 BaseBdev3 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.228 [ 00:21:41.228 { 
00:21:41.228 "name": "BaseBdev3", 00:21:41.228 "aliases": [ 00:21:41.228 "e6e66902-7334-443f-9283-97825500d4ec" 00:21:41.228 ], 00:21:41.228 "product_name": "Malloc disk", 00:21:41.228 "block_size": 512, 00:21:41.228 "num_blocks": 65536, 00:21:41.228 "uuid": "e6e66902-7334-443f-9283-97825500d4ec", 00:21:41.228 "assigned_rate_limits": { 00:21:41.228 "rw_ios_per_sec": 0, 00:21:41.228 "rw_mbytes_per_sec": 0, 00:21:41.228 "r_mbytes_per_sec": 0, 00:21:41.228 "w_mbytes_per_sec": 0 00:21:41.228 }, 00:21:41.228 "claimed": false, 00:21:41.228 "zoned": false, 00:21:41.228 "supported_io_types": { 00:21:41.228 "read": true, 00:21:41.228 "write": true, 00:21:41.228 "unmap": true, 00:21:41.228 "flush": true, 00:21:41.228 "reset": true, 00:21:41.228 "nvme_admin": false, 00:21:41.228 "nvme_io": false, 00:21:41.228 "nvme_io_md": false, 00:21:41.228 "write_zeroes": true, 00:21:41.228 "zcopy": true, 00:21:41.228 "get_zone_info": false, 00:21:41.228 "zone_management": false, 00:21:41.228 "zone_append": false, 00:21:41.228 "compare": false, 00:21:41.228 "compare_and_write": false, 00:21:41.228 "abort": true, 00:21:41.228 "seek_hole": false, 00:21:41.228 "seek_data": false, 00:21:41.228 "copy": true, 00:21:41.228 "nvme_iov_md": false 00:21:41.228 }, 00:21:41.228 "memory_domains": [ 00:21:41.228 { 00:21:41.228 "dma_device_id": "system", 00:21:41.228 "dma_device_type": 1 00:21:41.228 }, 00:21:41.228 { 00:21:41.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.228 "dma_device_type": 2 00:21:41.228 } 00:21:41.228 ], 00:21:41.228 "driver_specific": {} 00:21:41.228 } 00:21:41.228 ] 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.228 BaseBdev4 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:21:41.228 [ 00:21:41.228 { 00:21:41.228 "name": "BaseBdev4", 00:21:41.228 "aliases": [ 00:21:41.228 "42f16d8d-fde6-451f-89f5-b369c90000ce" 00:21:41.228 ], 00:21:41.228 "product_name": "Malloc disk", 00:21:41.228 "block_size": 512, 00:21:41.228 "num_blocks": 65536, 00:21:41.228 "uuid": "42f16d8d-fde6-451f-89f5-b369c90000ce", 00:21:41.228 "assigned_rate_limits": { 00:21:41.228 "rw_ios_per_sec": 0, 00:21:41.228 "rw_mbytes_per_sec": 0, 00:21:41.228 "r_mbytes_per_sec": 0, 00:21:41.228 "w_mbytes_per_sec": 0 00:21:41.228 }, 00:21:41.228 "claimed": false, 00:21:41.228 "zoned": false, 00:21:41.228 "supported_io_types": { 00:21:41.228 "read": true, 00:21:41.228 "write": true, 00:21:41.228 "unmap": true, 00:21:41.228 "flush": true, 00:21:41.228 "reset": true, 00:21:41.228 "nvme_admin": false, 00:21:41.228 "nvme_io": false, 00:21:41.228 "nvme_io_md": false, 00:21:41.228 "write_zeroes": true, 00:21:41.228 "zcopy": true, 00:21:41.228 "get_zone_info": false, 00:21:41.228 "zone_management": false, 00:21:41.228 "zone_append": false, 00:21:41.228 "compare": false, 00:21:41.228 "compare_and_write": false, 00:21:41.228 "abort": true, 00:21:41.228 "seek_hole": false, 00:21:41.228 "seek_data": false, 00:21:41.228 "copy": true, 00:21:41.228 "nvme_iov_md": false 00:21:41.228 }, 00:21:41.228 "memory_domains": [ 00:21:41.228 { 00:21:41.228 "dma_device_id": "system", 00:21:41.228 "dma_device_type": 1 00:21:41.228 }, 00:21:41.228 { 00:21:41.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.228 "dma_device_type": 2 00:21:41.228 } 00:21:41.228 ], 00:21:41.228 "driver_specific": {} 00:21:41.228 } 00:21:41.228 ] 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:41.228 12:53:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.228 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.228 [2024-12-05 12:53:23.704573] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:41.228 [2024-12-05 12:53:23.704619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:41.229 [2024-12-05 12:53:23.704641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:41.229 [2024-12-05 12:53:23.706482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:41.229 [2024-12-05 12:53:23.706545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:41.229 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.229 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:41.229 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:41.229 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:41.229 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:41.229 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:41.229 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:21:41.229 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.229 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:41.229 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:41.229 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.229 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.229 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.229 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.229 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.229 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.229 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.229 "name": "Existed_Raid", 00:21:41.229 "uuid": "5d46c9ff-d8c8-4630-bbe0-46bab56c7f97", 00:21:41.229 "strip_size_kb": 64, 00:21:41.229 "state": "configuring", 00:21:41.229 "raid_level": "raid0", 00:21:41.229 "superblock": true, 00:21:41.229 "num_base_bdevs": 4, 00:21:41.229 "num_base_bdevs_discovered": 3, 00:21:41.229 "num_base_bdevs_operational": 4, 00:21:41.229 "base_bdevs_list": [ 00:21:41.229 { 00:21:41.229 "name": "BaseBdev1", 00:21:41.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.229 "is_configured": false, 00:21:41.229 "data_offset": 0, 00:21:41.229 "data_size": 0 00:21:41.229 }, 00:21:41.229 { 00:21:41.229 "name": "BaseBdev2", 00:21:41.229 "uuid": "b1ed0cb3-e468-4753-a18d-068139f1953f", 00:21:41.229 "is_configured": true, 00:21:41.229 "data_offset": 2048, 00:21:41.229 "data_size": 63488 
00:21:41.229 }, 00:21:41.229 { 00:21:41.229 "name": "BaseBdev3", 00:21:41.229 "uuid": "e6e66902-7334-443f-9283-97825500d4ec", 00:21:41.229 "is_configured": true, 00:21:41.229 "data_offset": 2048, 00:21:41.229 "data_size": 63488 00:21:41.229 }, 00:21:41.229 { 00:21:41.229 "name": "BaseBdev4", 00:21:41.229 "uuid": "42f16d8d-fde6-451f-89f5-b369c90000ce", 00:21:41.229 "is_configured": true, 00:21:41.229 "data_offset": 2048, 00:21:41.229 "data_size": 63488 00:21:41.229 } 00:21:41.229 ] 00:21:41.229 }' 00:21:41.229 12:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.229 12:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.488 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:41.488 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.488 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.488 [2024-12-05 12:53:24.008620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:41.488 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.488 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:41.488 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:41.488 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:41.488 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:41.488 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:41.488 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:21:41.488 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.488 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:41.488 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:41.488 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.488 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.488 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.488 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.488 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.488 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.488 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.488 "name": "Existed_Raid", 00:21:41.488 "uuid": "5d46c9ff-d8c8-4630-bbe0-46bab56c7f97", 00:21:41.488 "strip_size_kb": 64, 00:21:41.488 "state": "configuring", 00:21:41.488 "raid_level": "raid0", 00:21:41.488 "superblock": true, 00:21:41.488 "num_base_bdevs": 4, 00:21:41.488 "num_base_bdevs_discovered": 2, 00:21:41.488 "num_base_bdevs_operational": 4, 00:21:41.488 "base_bdevs_list": [ 00:21:41.488 { 00:21:41.488 "name": "BaseBdev1", 00:21:41.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.488 "is_configured": false, 00:21:41.488 "data_offset": 0, 00:21:41.488 "data_size": 0 00:21:41.488 }, 00:21:41.488 { 00:21:41.489 "name": null, 00:21:41.489 "uuid": "b1ed0cb3-e468-4753-a18d-068139f1953f", 00:21:41.489 "is_configured": false, 00:21:41.489 "data_offset": 0, 00:21:41.489 "data_size": 63488 
00:21:41.489 }, 00:21:41.489 { 00:21:41.489 "name": "BaseBdev3", 00:21:41.489 "uuid": "e6e66902-7334-443f-9283-97825500d4ec", 00:21:41.489 "is_configured": true, 00:21:41.489 "data_offset": 2048, 00:21:41.489 "data_size": 63488 00:21:41.489 }, 00:21:41.489 { 00:21:41.489 "name": "BaseBdev4", 00:21:41.489 "uuid": "42f16d8d-fde6-451f-89f5-b369c90000ce", 00:21:41.489 "is_configured": true, 00:21:41.489 "data_offset": 2048, 00:21:41.489 "data_size": 63488 00:21:41.489 } 00:21:41.489 ] 00:21:41.489 }' 00:21:41.489 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.489 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.748 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.748 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.748 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.748 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:42.009 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.009 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:42.009 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:42.009 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.009 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.009 [2024-12-05 12:53:24.380794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:42.009 BaseBdev1 00:21:42.009 12:53:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.009 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:42.009 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:42.009 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:42.009 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:42.009 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:42.009 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:42.009 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:42.009 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.009 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.009 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.009 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:42.009 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.009 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.009 [ 00:21:42.009 { 00:21:42.009 "name": "BaseBdev1", 00:21:42.009 "aliases": [ 00:21:42.009 "7faee48a-4ed4-4223-a16d-7cfdb8504048" 00:21:42.009 ], 00:21:42.009 "product_name": "Malloc disk", 00:21:42.009 "block_size": 512, 00:21:42.009 "num_blocks": 65536, 00:21:42.009 "uuid": "7faee48a-4ed4-4223-a16d-7cfdb8504048", 00:21:42.009 "assigned_rate_limits": { 00:21:42.009 "rw_ios_per_sec": 0, 00:21:42.009 "rw_mbytes_per_sec": 0, 
00:21:42.009 "r_mbytes_per_sec": 0, 00:21:42.009 "w_mbytes_per_sec": 0 00:21:42.009 }, 00:21:42.009 "claimed": true, 00:21:42.009 "claim_type": "exclusive_write", 00:21:42.009 "zoned": false, 00:21:42.009 "supported_io_types": { 00:21:42.009 "read": true, 00:21:42.009 "write": true, 00:21:42.009 "unmap": true, 00:21:42.009 "flush": true, 00:21:42.009 "reset": true, 00:21:42.009 "nvme_admin": false, 00:21:42.009 "nvme_io": false, 00:21:42.009 "nvme_io_md": false, 00:21:42.009 "write_zeroes": true, 00:21:42.009 "zcopy": true, 00:21:42.009 "get_zone_info": false, 00:21:42.009 "zone_management": false, 00:21:42.009 "zone_append": false, 00:21:42.009 "compare": false, 00:21:42.009 "compare_and_write": false, 00:21:42.009 "abort": true, 00:21:42.009 "seek_hole": false, 00:21:42.009 "seek_data": false, 00:21:42.009 "copy": true, 00:21:42.009 "nvme_iov_md": false 00:21:42.009 }, 00:21:42.009 "memory_domains": [ 00:21:42.010 { 00:21:42.010 "dma_device_id": "system", 00:21:42.010 "dma_device_type": 1 00:21:42.010 }, 00:21:42.010 { 00:21:42.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.010 "dma_device_type": 2 00:21:42.010 } 00:21:42.010 ], 00:21:42.010 "driver_specific": {} 00:21:42.010 } 00:21:42.010 ] 00:21:42.010 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.010 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:42.010 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:42.010 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:42.010 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:42.010 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:42.010 12:53:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:42.010 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:42.010 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.010 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.010 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.010 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.010 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.010 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.010 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.010 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.010 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.010 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.010 "name": "Existed_Raid", 00:21:42.010 "uuid": "5d46c9ff-d8c8-4630-bbe0-46bab56c7f97", 00:21:42.010 "strip_size_kb": 64, 00:21:42.010 "state": "configuring", 00:21:42.010 "raid_level": "raid0", 00:21:42.010 "superblock": true, 00:21:42.010 "num_base_bdevs": 4, 00:21:42.010 "num_base_bdevs_discovered": 3, 00:21:42.010 "num_base_bdevs_operational": 4, 00:21:42.010 "base_bdevs_list": [ 00:21:42.010 { 00:21:42.010 "name": "BaseBdev1", 00:21:42.010 "uuid": "7faee48a-4ed4-4223-a16d-7cfdb8504048", 00:21:42.010 "is_configured": true, 00:21:42.010 "data_offset": 2048, 00:21:42.010 "data_size": 63488 00:21:42.010 }, 00:21:42.010 { 
00:21:42.010 "name": null, 00:21:42.010 "uuid": "b1ed0cb3-e468-4753-a18d-068139f1953f", 00:21:42.010 "is_configured": false, 00:21:42.010 "data_offset": 0, 00:21:42.010 "data_size": 63488 00:21:42.010 }, 00:21:42.010 { 00:21:42.010 "name": "BaseBdev3", 00:21:42.010 "uuid": "e6e66902-7334-443f-9283-97825500d4ec", 00:21:42.010 "is_configured": true, 00:21:42.010 "data_offset": 2048, 00:21:42.010 "data_size": 63488 00:21:42.010 }, 00:21:42.010 { 00:21:42.010 "name": "BaseBdev4", 00:21:42.010 "uuid": "42f16d8d-fde6-451f-89f5-b369c90000ce", 00:21:42.010 "is_configured": true, 00:21:42.010 "data_offset": 2048, 00:21:42.010 "data_size": 63488 00:21:42.010 } 00:21:42.010 ] 00:21:42.010 }' 00:21:42.010 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.010 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.272 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.273 [2024-12-05 12:53:24.744919] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.273 12:53:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.273 "name": "Existed_Raid", 00:21:42.273 "uuid": "5d46c9ff-d8c8-4630-bbe0-46bab56c7f97", 00:21:42.273 "strip_size_kb": 64, 00:21:42.273 "state": "configuring", 00:21:42.273 "raid_level": "raid0", 00:21:42.273 "superblock": true, 00:21:42.273 "num_base_bdevs": 4, 00:21:42.273 "num_base_bdevs_discovered": 2, 00:21:42.273 "num_base_bdevs_operational": 4, 00:21:42.273 "base_bdevs_list": [ 00:21:42.273 { 00:21:42.273 "name": "BaseBdev1", 00:21:42.273 "uuid": "7faee48a-4ed4-4223-a16d-7cfdb8504048", 00:21:42.273 "is_configured": true, 00:21:42.273 "data_offset": 2048, 00:21:42.273 "data_size": 63488 00:21:42.273 }, 00:21:42.273 { 00:21:42.273 "name": null, 00:21:42.273 "uuid": "b1ed0cb3-e468-4753-a18d-068139f1953f", 00:21:42.273 "is_configured": false, 00:21:42.273 "data_offset": 0, 00:21:42.273 "data_size": 63488 00:21:42.273 }, 00:21:42.273 { 00:21:42.273 "name": null, 00:21:42.273 "uuid": "e6e66902-7334-443f-9283-97825500d4ec", 00:21:42.273 "is_configured": false, 00:21:42.273 "data_offset": 0, 00:21:42.273 "data_size": 63488 00:21:42.273 }, 00:21:42.273 { 00:21:42.273 "name": "BaseBdev4", 00:21:42.273 "uuid": "42f16d8d-fde6-451f-89f5-b369c90000ce", 00:21:42.273 "is_configured": true, 00:21:42.273 "data_offset": 2048, 00:21:42.273 "data_size": 63488 00:21:42.273 } 00:21:42.273 ] 00:21:42.273 }' 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.273 12:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.533 
12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.533 [2024-12-05 12:53:25.084981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.533 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.794 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.794 "name": "Existed_Raid", 00:21:42.794 "uuid": "5d46c9ff-d8c8-4630-bbe0-46bab56c7f97", 00:21:42.794 "strip_size_kb": 64, 00:21:42.794 "state": "configuring", 00:21:42.794 "raid_level": "raid0", 00:21:42.794 "superblock": true, 00:21:42.794 "num_base_bdevs": 4, 00:21:42.794 "num_base_bdevs_discovered": 3, 00:21:42.794 "num_base_bdevs_operational": 4, 00:21:42.794 "base_bdevs_list": [ 00:21:42.794 { 00:21:42.794 "name": "BaseBdev1", 00:21:42.794 "uuid": "7faee48a-4ed4-4223-a16d-7cfdb8504048", 00:21:42.794 "is_configured": true, 00:21:42.794 "data_offset": 2048, 00:21:42.794 "data_size": 63488 00:21:42.794 }, 00:21:42.794 { 00:21:42.794 "name": null, 00:21:42.794 "uuid": "b1ed0cb3-e468-4753-a18d-068139f1953f", 00:21:42.794 "is_configured": false, 00:21:42.794 "data_offset": 0, 00:21:42.794 "data_size": 63488 00:21:42.794 }, 00:21:42.794 { 00:21:42.794 "name": "BaseBdev3", 00:21:42.794 "uuid": "e6e66902-7334-443f-9283-97825500d4ec", 00:21:42.794 "is_configured": true, 00:21:42.794 "data_offset": 2048, 00:21:42.794 "data_size": 63488 00:21:42.794 }, 00:21:42.794 { 00:21:42.794 "name": "BaseBdev4", 00:21:42.794 "uuid": 
"42f16d8d-fde6-451f-89f5-b369c90000ce", 00:21:42.794 "is_configured": true, 00:21:42.794 "data_offset": 2048, 00:21:42.794 "data_size": 63488 00:21:42.794 } 00:21:42.794 ] 00:21:42.794 }' 00:21:42.794 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.794 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.054 [2024-12-05 12:53:25.469089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.054 "name": "Existed_Raid", 00:21:43.054 "uuid": "5d46c9ff-d8c8-4630-bbe0-46bab56c7f97", 00:21:43.054 "strip_size_kb": 64, 00:21:43.054 "state": "configuring", 00:21:43.054 "raid_level": "raid0", 00:21:43.054 "superblock": true, 00:21:43.054 "num_base_bdevs": 4, 00:21:43.054 "num_base_bdevs_discovered": 2, 00:21:43.054 "num_base_bdevs_operational": 4, 00:21:43.054 "base_bdevs_list": [ 00:21:43.054 { 00:21:43.054 "name": null, 00:21:43.054 
"uuid": "7faee48a-4ed4-4223-a16d-7cfdb8504048", 00:21:43.054 "is_configured": false, 00:21:43.054 "data_offset": 0, 00:21:43.054 "data_size": 63488 00:21:43.054 }, 00:21:43.054 { 00:21:43.054 "name": null, 00:21:43.054 "uuid": "b1ed0cb3-e468-4753-a18d-068139f1953f", 00:21:43.054 "is_configured": false, 00:21:43.054 "data_offset": 0, 00:21:43.054 "data_size": 63488 00:21:43.054 }, 00:21:43.054 { 00:21:43.054 "name": "BaseBdev3", 00:21:43.054 "uuid": "e6e66902-7334-443f-9283-97825500d4ec", 00:21:43.054 "is_configured": true, 00:21:43.054 "data_offset": 2048, 00:21:43.054 "data_size": 63488 00:21:43.054 }, 00:21:43.054 { 00:21:43.054 "name": "BaseBdev4", 00:21:43.054 "uuid": "42f16d8d-fde6-451f-89f5-b369c90000ce", 00:21:43.054 "is_configured": true, 00:21:43.054 "data_offset": 2048, 00:21:43.054 "data_size": 63488 00:21:43.054 } 00:21:43.054 ] 00:21:43.054 }' 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.054 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.396 [2024-12-05 12:53:25.855885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.396 12:53:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.396 "name": "Existed_Raid", 00:21:43.396 "uuid": "5d46c9ff-d8c8-4630-bbe0-46bab56c7f97", 00:21:43.396 "strip_size_kb": 64, 00:21:43.396 "state": "configuring", 00:21:43.396 "raid_level": "raid0", 00:21:43.396 "superblock": true, 00:21:43.396 "num_base_bdevs": 4, 00:21:43.396 "num_base_bdevs_discovered": 3, 00:21:43.396 "num_base_bdevs_operational": 4, 00:21:43.396 "base_bdevs_list": [ 00:21:43.396 { 00:21:43.396 "name": null, 00:21:43.396 "uuid": "7faee48a-4ed4-4223-a16d-7cfdb8504048", 00:21:43.396 "is_configured": false, 00:21:43.396 "data_offset": 0, 00:21:43.396 "data_size": 63488 00:21:43.396 }, 00:21:43.396 { 00:21:43.396 "name": "BaseBdev2", 00:21:43.396 "uuid": "b1ed0cb3-e468-4753-a18d-068139f1953f", 00:21:43.396 "is_configured": true, 00:21:43.396 "data_offset": 2048, 00:21:43.396 "data_size": 63488 00:21:43.396 }, 00:21:43.396 { 00:21:43.396 "name": "BaseBdev3", 00:21:43.396 "uuid": "e6e66902-7334-443f-9283-97825500d4ec", 00:21:43.396 "is_configured": true, 00:21:43.396 "data_offset": 2048, 00:21:43.396 "data_size": 63488 00:21:43.396 }, 00:21:43.396 { 00:21:43.396 "name": "BaseBdev4", 00:21:43.396 "uuid": "42f16d8d-fde6-451f-89f5-b369c90000ce", 00:21:43.396 "is_configured": true, 00:21:43.396 "data_offset": 2048, 00:21:43.396 "data_size": 63488 00:21:43.396 } 00:21:43.396 ] 00:21:43.396 }' 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.396 12:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.657 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.657 12:53:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.657 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:43.657 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.657 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.657 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:43.657 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.657 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.657 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.657 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:43.657 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7faee48a-4ed4-4223-a16d-7cfdb8504048 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.918 [2024-12-05 12:53:26.278359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:43.918 [2024-12-05 12:53:26.278548] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:43.918 [2024-12-05 12:53:26.278558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:43.918 NewBaseBdev 00:21:43.918 [2024-12-05 12:53:26.278771] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:43.918 [2024-12-05 12:53:26.278868] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:43.918 [2024-12-05 12:53:26.278876] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:43.918 [2024-12-05 12:53:26.278967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.918 
12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.918 [ 00:21:43.918 { 00:21:43.918 "name": "NewBaseBdev", 00:21:43.918 "aliases": [ 00:21:43.918 "7faee48a-4ed4-4223-a16d-7cfdb8504048" 00:21:43.918 ], 00:21:43.918 "product_name": "Malloc disk", 00:21:43.918 "block_size": 512, 00:21:43.918 "num_blocks": 65536, 00:21:43.918 "uuid": "7faee48a-4ed4-4223-a16d-7cfdb8504048", 00:21:43.918 "assigned_rate_limits": { 00:21:43.918 "rw_ios_per_sec": 0, 00:21:43.918 "rw_mbytes_per_sec": 0, 00:21:43.918 "r_mbytes_per_sec": 0, 00:21:43.918 "w_mbytes_per_sec": 0 00:21:43.918 }, 00:21:43.918 "claimed": true, 00:21:43.918 "claim_type": "exclusive_write", 00:21:43.918 "zoned": false, 00:21:43.918 "supported_io_types": { 00:21:43.918 "read": true, 00:21:43.918 "write": true, 00:21:43.918 "unmap": true, 00:21:43.918 "flush": true, 00:21:43.918 "reset": true, 00:21:43.918 "nvme_admin": false, 00:21:43.918 "nvme_io": false, 00:21:43.918 "nvme_io_md": false, 00:21:43.918 "write_zeroes": true, 00:21:43.918 "zcopy": true, 00:21:43.918 "get_zone_info": false, 00:21:43.918 "zone_management": false, 00:21:43.918 "zone_append": false, 00:21:43.918 "compare": false, 00:21:43.918 "compare_and_write": false, 00:21:43.918 "abort": true, 00:21:43.918 "seek_hole": false, 00:21:43.918 "seek_data": false, 00:21:43.918 "copy": true, 00:21:43.918 "nvme_iov_md": false 00:21:43.918 }, 00:21:43.918 "memory_domains": [ 00:21:43.918 { 00:21:43.918 "dma_device_id": "system", 00:21:43.918 "dma_device_type": 1 00:21:43.918 }, 00:21:43.918 { 00:21:43.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.918 "dma_device_type": 2 00:21:43.918 } 00:21:43.918 ], 00:21:43.918 "driver_specific": {} 00:21:43.918 } 00:21:43.918 ] 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:43.918 12:53:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.918 "name": "Existed_Raid", 00:21:43.918 "uuid": "5d46c9ff-d8c8-4630-bbe0-46bab56c7f97", 00:21:43.918 "strip_size_kb": 64, 00:21:43.918 
"state": "online", 00:21:43.918 "raid_level": "raid0", 00:21:43.918 "superblock": true, 00:21:43.918 "num_base_bdevs": 4, 00:21:43.918 "num_base_bdevs_discovered": 4, 00:21:43.918 "num_base_bdevs_operational": 4, 00:21:43.918 "base_bdevs_list": [ 00:21:43.918 { 00:21:43.918 "name": "NewBaseBdev", 00:21:43.918 "uuid": "7faee48a-4ed4-4223-a16d-7cfdb8504048", 00:21:43.918 "is_configured": true, 00:21:43.918 "data_offset": 2048, 00:21:43.918 "data_size": 63488 00:21:43.918 }, 00:21:43.918 { 00:21:43.918 "name": "BaseBdev2", 00:21:43.918 "uuid": "b1ed0cb3-e468-4753-a18d-068139f1953f", 00:21:43.918 "is_configured": true, 00:21:43.918 "data_offset": 2048, 00:21:43.918 "data_size": 63488 00:21:43.918 }, 00:21:43.918 { 00:21:43.918 "name": "BaseBdev3", 00:21:43.918 "uuid": "e6e66902-7334-443f-9283-97825500d4ec", 00:21:43.918 "is_configured": true, 00:21:43.918 "data_offset": 2048, 00:21:43.918 "data_size": 63488 00:21:43.918 }, 00:21:43.918 { 00:21:43.918 "name": "BaseBdev4", 00:21:43.918 "uuid": "42f16d8d-fde6-451f-89f5-b369c90000ce", 00:21:43.918 "is_configured": true, 00:21:43.918 "data_offset": 2048, 00:21:43.918 "data_size": 63488 00:21:43.918 } 00:21:43.918 ] 00:21:43.918 }' 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.918 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.179 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:44.179 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:44.179 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:44.179 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:44.179 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:44.179 
12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:44.179 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:44.179 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.179 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:44.179 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.179 [2024-12-05 12:53:26.598782] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:44.179 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.179 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:44.179 "name": "Existed_Raid", 00:21:44.179 "aliases": [ 00:21:44.179 "5d46c9ff-d8c8-4630-bbe0-46bab56c7f97" 00:21:44.179 ], 00:21:44.179 "product_name": "Raid Volume", 00:21:44.179 "block_size": 512, 00:21:44.179 "num_blocks": 253952, 00:21:44.179 "uuid": "5d46c9ff-d8c8-4630-bbe0-46bab56c7f97", 00:21:44.179 "assigned_rate_limits": { 00:21:44.179 "rw_ios_per_sec": 0, 00:21:44.179 "rw_mbytes_per_sec": 0, 00:21:44.179 "r_mbytes_per_sec": 0, 00:21:44.179 "w_mbytes_per_sec": 0 00:21:44.180 }, 00:21:44.180 "claimed": false, 00:21:44.180 "zoned": false, 00:21:44.180 "supported_io_types": { 00:21:44.180 "read": true, 00:21:44.180 "write": true, 00:21:44.180 "unmap": true, 00:21:44.180 "flush": true, 00:21:44.180 "reset": true, 00:21:44.180 "nvme_admin": false, 00:21:44.180 "nvme_io": false, 00:21:44.180 "nvme_io_md": false, 00:21:44.180 "write_zeroes": true, 00:21:44.180 "zcopy": false, 00:21:44.180 "get_zone_info": false, 00:21:44.180 "zone_management": false, 00:21:44.180 "zone_append": false, 00:21:44.180 "compare": false, 00:21:44.180 "compare_and_write": false, 00:21:44.180 "abort": 
false, 00:21:44.180 "seek_hole": false, 00:21:44.180 "seek_data": false, 00:21:44.180 "copy": false, 00:21:44.180 "nvme_iov_md": false 00:21:44.180 }, 00:21:44.180 "memory_domains": [ 00:21:44.180 { 00:21:44.180 "dma_device_id": "system", 00:21:44.180 "dma_device_type": 1 00:21:44.180 }, 00:21:44.180 { 00:21:44.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.180 "dma_device_type": 2 00:21:44.180 }, 00:21:44.180 { 00:21:44.180 "dma_device_id": "system", 00:21:44.180 "dma_device_type": 1 00:21:44.180 }, 00:21:44.180 { 00:21:44.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.180 "dma_device_type": 2 00:21:44.180 }, 00:21:44.180 { 00:21:44.180 "dma_device_id": "system", 00:21:44.180 "dma_device_type": 1 00:21:44.180 }, 00:21:44.180 { 00:21:44.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.180 "dma_device_type": 2 00:21:44.180 }, 00:21:44.180 { 00:21:44.180 "dma_device_id": "system", 00:21:44.180 "dma_device_type": 1 00:21:44.180 }, 00:21:44.180 { 00:21:44.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.180 "dma_device_type": 2 00:21:44.180 } 00:21:44.180 ], 00:21:44.180 "driver_specific": { 00:21:44.180 "raid": { 00:21:44.180 "uuid": "5d46c9ff-d8c8-4630-bbe0-46bab56c7f97", 00:21:44.180 "strip_size_kb": 64, 00:21:44.180 "state": "online", 00:21:44.180 "raid_level": "raid0", 00:21:44.180 "superblock": true, 00:21:44.180 "num_base_bdevs": 4, 00:21:44.180 "num_base_bdevs_discovered": 4, 00:21:44.180 "num_base_bdevs_operational": 4, 00:21:44.180 "base_bdevs_list": [ 00:21:44.180 { 00:21:44.180 "name": "NewBaseBdev", 00:21:44.180 "uuid": "7faee48a-4ed4-4223-a16d-7cfdb8504048", 00:21:44.180 "is_configured": true, 00:21:44.180 "data_offset": 2048, 00:21:44.180 "data_size": 63488 00:21:44.180 }, 00:21:44.180 { 00:21:44.180 "name": "BaseBdev2", 00:21:44.180 "uuid": "b1ed0cb3-e468-4753-a18d-068139f1953f", 00:21:44.180 "is_configured": true, 00:21:44.180 "data_offset": 2048, 00:21:44.180 "data_size": 63488 00:21:44.180 }, 00:21:44.180 { 00:21:44.180 
"name": "BaseBdev3", 00:21:44.180 "uuid": "e6e66902-7334-443f-9283-97825500d4ec", 00:21:44.180 "is_configured": true, 00:21:44.180 "data_offset": 2048, 00:21:44.180 "data_size": 63488 00:21:44.180 }, 00:21:44.180 { 00:21:44.180 "name": "BaseBdev4", 00:21:44.180 "uuid": "42f16d8d-fde6-451f-89f5-b369c90000ce", 00:21:44.180 "is_configured": true, 00:21:44.180 "data_offset": 2048, 00:21:44.180 "data_size": 63488 00:21:44.180 } 00:21:44.180 ] 00:21:44.180 } 00:21:44.180 } 00:21:44.180 }' 00:21:44.180 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:44.180 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:44.180 BaseBdev2 00:21:44.180 BaseBdev3 00:21:44.180 BaseBdev4' 00:21:44.180 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:44.180 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:44.180 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:44.180 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:44.180 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:44.180 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.180 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.180 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.180 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:44.180 12:53:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:44.180 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:44.180 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:44.180 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:44.180 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.180 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.180 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.441 [2024-12-05 12:53:26.838507] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:44.441 [2024-12-05 12:53:26.838533] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:44.441 [2024-12-05 12:53:26.838589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:44.441 [2024-12-05 12:53:26.838644] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:44.441 [2024-12-05 12:53:26.838652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68147 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68147 ']' 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68147 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68147 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:44.441 killing process with pid 68147 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68147' 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68147 00:21:44.441 [2024-12-05 12:53:26.867646] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:44.441 12:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68147 00:21:44.779 [2024-12-05 12:53:27.061911] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:45.354 12:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:21:45.354 00:21:45.354 real 0m8.232s 00:21:45.354 user 0m13.161s 00:21:45.354 sys 0m1.311s 00:21:45.354 12:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:45.354 12:53:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.354 ************************************ 00:21:45.354 END TEST raid_state_function_test_sb 00:21:45.354 ************************************ 00:21:45.354 12:53:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:21:45.354 12:53:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:45.354 12:53:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:45.354 12:53:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:45.354 ************************************ 00:21:45.354 START TEST raid_superblock_test 00:21:45.354 ************************************ 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68784 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68784 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68784 ']' 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:45.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:45.354 12:53:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.354 [2024-12-05 12:53:27.756757] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:21:45.354 [2024-12-05 12:53:27.756864] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68784 ] 00:21:45.354 [2024-12-05 12:53:27.906959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.615 [2024-12-05 12:53:27.986745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.615 [2024-12-05 12:53:28.096483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:45.615 [2024-12-05 12:53:28.096529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:21:46.185 
12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.185 malloc1 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.185 [2024-12-05 12:53:28.644256] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:46.185 [2024-12-05 12:53:28.644307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.185 [2024-12-05 12:53:28.644325] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:46.185 [2024-12-05 12:53:28.644332] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.185 [2024-12-05 12:53:28.646134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.185 [2024-12-05 12:53:28.646168] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:46.185 pt1 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.185 malloc2 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.185 [2024-12-05 12:53:28.679873] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:46.185 [2024-12-05 12:53:28.679916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.185 [2024-12-05 12:53:28.679935] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:46.185 [2024-12-05 12:53:28.679942] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.185 [2024-12-05 12:53:28.681736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.185 [2024-12-05 12:53:28.681765] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:46.185 
pt2 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.185 malloc3 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.185 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.185 [2024-12-05 12:53:28.729988] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:46.186 [2024-12-05 12:53:28.730037] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.186 [2024-12-05 12:53:28.730056] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:46.186 [2024-12-05 12:53:28.730063] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.186 [2024-12-05 12:53:28.731869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.186 [2024-12-05 12:53:28.731899] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:46.186 pt3 00:21:46.186 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.186 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:46.186 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:46.186 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:21:46.186 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:21:46.186 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:46.186 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:46.186 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:46.186 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:46.186 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:21:46.186 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.186 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.186 malloc4 00:21:46.186 12:53:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.186 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:46.186 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.186 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.446 [2024-12-05 12:53:28.769821] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:46.446 [2024-12-05 12:53:28.769872] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.446 [2024-12-05 12:53:28.769887] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:46.446 [2024-12-05 12:53:28.769894] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.446 [2024-12-05 12:53:28.771708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.446 [2024-12-05 12:53:28.771735] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:46.446 pt4 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.446 [2024-12-05 12:53:28.781858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:46.446 [2024-12-05 
12:53:28.783404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:46.446 [2024-12-05 12:53:28.783479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:46.446 [2024-12-05 12:53:28.783528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:46.446 [2024-12-05 12:53:28.783691] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:46.446 [2024-12-05 12:53:28.783700] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:46.446 [2024-12-05 12:53:28.783924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:46.446 [2024-12-05 12:53:28.784057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:46.446 [2024-12-05 12:53:28.784066] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:46.446 [2024-12-05 12:53:28.784186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:46.446 "name": "raid_bdev1", 00:21:46.446 "uuid": "d0baa4e4-a16e-4c8c-9322-42ae5711f5d6", 00:21:46.446 "strip_size_kb": 64, 00:21:46.446 "state": "online", 00:21:46.446 "raid_level": "raid0", 00:21:46.446 "superblock": true, 00:21:46.446 "num_base_bdevs": 4, 00:21:46.446 "num_base_bdevs_discovered": 4, 00:21:46.446 "num_base_bdevs_operational": 4, 00:21:46.446 "base_bdevs_list": [ 00:21:46.446 { 00:21:46.446 "name": "pt1", 00:21:46.446 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:46.446 "is_configured": true, 00:21:46.446 "data_offset": 2048, 00:21:46.446 "data_size": 63488 00:21:46.446 }, 00:21:46.446 { 00:21:46.446 "name": "pt2", 00:21:46.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:46.446 "is_configured": true, 00:21:46.446 "data_offset": 2048, 00:21:46.446 "data_size": 63488 00:21:46.446 }, 00:21:46.446 { 00:21:46.446 "name": "pt3", 00:21:46.446 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:46.446 "is_configured": true, 00:21:46.446 "data_offset": 2048, 00:21:46.446 
"data_size": 63488 00:21:46.446 }, 00:21:46.446 { 00:21:46.446 "name": "pt4", 00:21:46.446 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:46.446 "is_configured": true, 00:21:46.446 "data_offset": 2048, 00:21:46.446 "data_size": 63488 00:21:46.446 } 00:21:46.446 ] 00:21:46.446 }' 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:46.446 12:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.723 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:46.723 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:46.723 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:46.723 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:46.723 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:46.723 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:46.723 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:46.723 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.723 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.723 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:46.723 [2024-12-05 12:53:29.118196] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:46.723 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.723 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:46.723 "name": "raid_bdev1", 00:21:46.723 "aliases": [ 00:21:46.723 "d0baa4e4-a16e-4c8c-9322-42ae5711f5d6" 
00:21:46.723 ], 00:21:46.723 "product_name": "Raid Volume", 00:21:46.723 "block_size": 512, 00:21:46.723 "num_blocks": 253952, 00:21:46.723 "uuid": "d0baa4e4-a16e-4c8c-9322-42ae5711f5d6", 00:21:46.723 "assigned_rate_limits": { 00:21:46.723 "rw_ios_per_sec": 0, 00:21:46.723 "rw_mbytes_per_sec": 0, 00:21:46.723 "r_mbytes_per_sec": 0, 00:21:46.723 "w_mbytes_per_sec": 0 00:21:46.723 }, 00:21:46.723 "claimed": false, 00:21:46.723 "zoned": false, 00:21:46.723 "supported_io_types": { 00:21:46.723 "read": true, 00:21:46.723 "write": true, 00:21:46.723 "unmap": true, 00:21:46.723 "flush": true, 00:21:46.723 "reset": true, 00:21:46.723 "nvme_admin": false, 00:21:46.723 "nvme_io": false, 00:21:46.723 "nvme_io_md": false, 00:21:46.723 "write_zeroes": true, 00:21:46.723 "zcopy": false, 00:21:46.723 "get_zone_info": false, 00:21:46.723 "zone_management": false, 00:21:46.723 "zone_append": false, 00:21:46.723 "compare": false, 00:21:46.723 "compare_and_write": false, 00:21:46.723 "abort": false, 00:21:46.724 "seek_hole": false, 00:21:46.724 "seek_data": false, 00:21:46.724 "copy": false, 00:21:46.724 "nvme_iov_md": false 00:21:46.724 }, 00:21:46.724 "memory_domains": [ 00:21:46.724 { 00:21:46.724 "dma_device_id": "system", 00:21:46.724 "dma_device_type": 1 00:21:46.724 }, 00:21:46.724 { 00:21:46.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.724 "dma_device_type": 2 00:21:46.724 }, 00:21:46.724 { 00:21:46.724 "dma_device_id": "system", 00:21:46.724 "dma_device_type": 1 00:21:46.724 }, 00:21:46.724 { 00:21:46.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.724 "dma_device_type": 2 00:21:46.724 }, 00:21:46.724 { 00:21:46.724 "dma_device_id": "system", 00:21:46.724 "dma_device_type": 1 00:21:46.724 }, 00:21:46.724 { 00:21:46.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.724 "dma_device_type": 2 00:21:46.724 }, 00:21:46.724 { 00:21:46.724 "dma_device_id": "system", 00:21:46.724 "dma_device_type": 1 00:21:46.724 }, 00:21:46.724 { 00:21:46.724 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:46.724 "dma_device_type": 2 00:21:46.724 } 00:21:46.724 ], 00:21:46.724 "driver_specific": { 00:21:46.724 "raid": { 00:21:46.724 "uuid": "d0baa4e4-a16e-4c8c-9322-42ae5711f5d6", 00:21:46.724 "strip_size_kb": 64, 00:21:46.724 "state": "online", 00:21:46.724 "raid_level": "raid0", 00:21:46.724 "superblock": true, 00:21:46.724 "num_base_bdevs": 4, 00:21:46.724 "num_base_bdevs_discovered": 4, 00:21:46.724 "num_base_bdevs_operational": 4, 00:21:46.724 "base_bdevs_list": [ 00:21:46.724 { 00:21:46.724 "name": "pt1", 00:21:46.724 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:46.724 "is_configured": true, 00:21:46.724 "data_offset": 2048, 00:21:46.724 "data_size": 63488 00:21:46.724 }, 00:21:46.724 { 00:21:46.724 "name": "pt2", 00:21:46.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:46.724 "is_configured": true, 00:21:46.724 "data_offset": 2048, 00:21:46.724 "data_size": 63488 00:21:46.724 }, 00:21:46.724 { 00:21:46.724 "name": "pt3", 00:21:46.724 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:46.724 "is_configured": true, 00:21:46.724 "data_offset": 2048, 00:21:46.724 "data_size": 63488 00:21:46.724 }, 00:21:46.724 { 00:21:46.724 "name": "pt4", 00:21:46.724 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:46.724 "is_configured": true, 00:21:46.724 "data_offset": 2048, 00:21:46.724 "data_size": 63488 00:21:46.724 } 00:21:46.724 ] 00:21:46.724 } 00:21:46.724 } 00:21:46.724 }' 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:46.724 pt2 00:21:46.724 pt3 00:21:46.724 pt4' 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:46.724 12:53:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:46.724 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.986 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.986 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:46.986 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:46.986 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:46.986 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:46.986 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:46.986 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.986 [2024-12-05 12:53:29.342204] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:46.986 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.986 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d0baa4e4-a16e-4c8c-9322-42ae5711f5d6 00:21:46.986 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d0baa4e4-a16e-4c8c-9322-42ae5711f5d6 ']' 00:21:46.986 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:46.986 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.986 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.986 [2024-12-05 12:53:29.369950] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:46.986 [2024-12-05 12:53:29.369975] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:46.986 [2024-12-05 12:53:29.370041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:46.986 [2024-12-05 12:53:29.370101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:46.987 [2024-12-05 12:53:29.370113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.987 12:53:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.987 [2024-12-05 12:53:29.477992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:46.987 [2024-12-05 12:53:29.479590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:46.987 [2024-12-05 12:53:29.479633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:46.987 [2024-12-05 12:53:29.479661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:46.987 [2024-12-05 12:53:29.479708] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:46.987 [2024-12-05 12:53:29.479748] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:46.987 [2024-12-05 12:53:29.479764] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:46.987 [2024-12-05 12:53:29.479779] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:21:46.987 [2024-12-05 12:53:29.479790] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:46.987 [2024-12-05 12:53:29.479801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:21:46.987 request: 00:21:46.987 { 00:21:46.987 "name": "raid_bdev1", 00:21:46.987 "raid_level": "raid0", 00:21:46.987 "base_bdevs": [ 00:21:46.987 "malloc1", 00:21:46.987 "malloc2", 00:21:46.987 "malloc3", 00:21:46.987 "malloc4" 00:21:46.987 ], 00:21:46.987 "strip_size_kb": 64, 00:21:46.987 "superblock": false, 00:21:46.987 "method": "bdev_raid_create", 00:21:46.987 "req_id": 1 00:21:46.987 } 00:21:46.987 Got JSON-RPC error response 00:21:46.987 response: 00:21:46.987 { 00:21:46.987 "code": -17, 00:21:46.987 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:46.987 } 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.987 [2024-12-05 12:53:29.517972] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:46.987 [2024-12-05 12:53:29.518026] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.987 [2024-12-05 12:53:29.518042] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:46.987 [2024-12-05 12:53:29.518051] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.987 [2024-12-05 12:53:29.519919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.987 [2024-12-05 12:53:29.520036] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:46.987 [2024-12-05 12:53:29.520116] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:46.987 [2024-12-05 12:53:29.520165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:46.987 pt1 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.987 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:46.987 "name": "raid_bdev1", 00:21:46.987 "uuid": "d0baa4e4-a16e-4c8c-9322-42ae5711f5d6", 00:21:46.987 "strip_size_kb": 64, 00:21:46.987 "state": "configuring", 00:21:46.987 "raid_level": "raid0", 00:21:46.987 "superblock": true, 00:21:46.987 "num_base_bdevs": 4, 00:21:46.987 "num_base_bdevs_discovered": 1, 00:21:46.987 "num_base_bdevs_operational": 4, 00:21:46.987 "base_bdevs_list": [ 00:21:46.987 { 00:21:46.987 "name": "pt1", 00:21:46.987 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:46.987 "is_configured": true, 00:21:46.987 "data_offset": 2048, 00:21:46.987 "data_size": 63488 00:21:46.987 }, 00:21:46.988 { 00:21:46.988 "name": null, 00:21:46.988 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:46.988 "is_configured": false, 00:21:46.988 "data_offset": 2048, 00:21:46.988 "data_size": 63488 00:21:46.988 }, 00:21:46.988 { 00:21:46.988 "name": null, 00:21:46.988 
"uuid": "00000000-0000-0000-0000-000000000003", 00:21:46.988 "is_configured": false, 00:21:46.988 "data_offset": 2048, 00:21:46.988 "data_size": 63488 00:21:46.988 }, 00:21:46.988 { 00:21:46.988 "name": null, 00:21:46.988 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:46.988 "is_configured": false, 00:21:46.988 "data_offset": 2048, 00:21:46.988 "data_size": 63488 00:21:46.988 } 00:21:46.988 ] 00:21:46.988 }' 00:21:46.988 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:46.988 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.557 [2024-12-05 12:53:29.850036] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:47.557 [2024-12-05 12:53:29.850092] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.557 [2024-12-05 12:53:29.850110] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:47.557 [2024-12-05 12:53:29.850119] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.557 [2024-12-05 12:53:29.850459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.557 [2024-12-05 12:53:29.850471] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:47.557 [2024-12-05 12:53:29.850539] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:47.557 [2024-12-05 12:53:29.850557] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:47.557 pt2 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.557 [2024-12-05 12:53:29.858046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.557 12:53:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:47.557 "name": "raid_bdev1", 00:21:47.557 "uuid": "d0baa4e4-a16e-4c8c-9322-42ae5711f5d6", 00:21:47.557 "strip_size_kb": 64, 00:21:47.557 "state": "configuring", 00:21:47.557 "raid_level": "raid0", 00:21:47.557 "superblock": true, 00:21:47.557 "num_base_bdevs": 4, 00:21:47.557 "num_base_bdevs_discovered": 1, 00:21:47.557 "num_base_bdevs_operational": 4, 00:21:47.557 "base_bdevs_list": [ 00:21:47.557 { 00:21:47.557 "name": "pt1", 00:21:47.557 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:47.557 "is_configured": true, 00:21:47.557 "data_offset": 2048, 00:21:47.557 "data_size": 63488 00:21:47.557 }, 00:21:47.557 { 00:21:47.557 "name": null, 00:21:47.557 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:47.557 "is_configured": false, 00:21:47.557 "data_offset": 0, 00:21:47.557 "data_size": 63488 00:21:47.557 }, 00:21:47.557 { 00:21:47.557 "name": null, 00:21:47.557 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:47.557 "is_configured": false, 00:21:47.557 "data_offset": 2048, 00:21:47.557 "data_size": 63488 00:21:47.557 }, 00:21:47.557 { 00:21:47.557 "name": null, 00:21:47.557 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:47.557 "is_configured": false, 00:21:47.557 "data_offset": 2048, 00:21:47.557 "data_size": 63488 00:21:47.557 } 00:21:47.557 ] 00:21:47.557 }' 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:47.557 12:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:21:47.816 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:47.816 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:47.816 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:47.816 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.816 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.816 [2024-12-05 12:53:30.162097] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:47.816 [2024-12-05 12:53:30.162153] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.816 [2024-12-05 12:53:30.162168] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:47.816 [2024-12-05 12:53:30.162175] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.816 [2024-12-05 12:53:30.162530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.816 [2024-12-05 12:53:30.162542] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:47.816 [2024-12-05 12:53:30.162602] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:47.816 [2024-12-05 12:53:30.162617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:47.816 pt2 00:21:47.816 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.816 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:47.816 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:47.816 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:21:47.816 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.816 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.816 [2024-12-05 12:53:30.170081] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:47.816 [2024-12-05 12:53:30.170120] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.816 [2024-12-05 12:53:30.170134] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:47.816 [2024-12-05 12:53:30.170141] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.816 [2024-12-05 12:53:30.170464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.816 [2024-12-05 12:53:30.170474] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:47.816 [2024-12-05 12:53:30.170539] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:47.816 [2024-12-05 12:53:30.170557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:47.816 pt3 00:21:47.816 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.816 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:47.816 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:47.816 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:47.816 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.816 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.816 [2024-12-05 12:53:30.178062] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:21:47.816 [2024-12-05 12:53:30.178100] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.816 [2024-12-05 12:53:30.178114] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:47.816 [2024-12-05 12:53:30.178120] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.816 [2024-12-05 12:53:30.178433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.816 [2024-12-05 12:53:30.178443] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:47.816 [2024-12-05 12:53:30.178512] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:47.816 [2024-12-05 12:53:30.178529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:47.816 [2024-12-05 12:53:30.178639] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:47.816 [2024-12-05 12:53:30.178647] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:47.816 [2024-12-05 12:53:30.178849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:47.816 [2024-12-05 12:53:30.178955] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:47.816 [2024-12-05 12:53:30.178963] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:47.816 [2024-12-05 12:53:30.179060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:47.816 pt4 00:21:47.816 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.816 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:47.817 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:47.817 
12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:47.817 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:47.817 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:47.817 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:47.817 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:47.817 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:47.817 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:47.817 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:47.817 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:47.817 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:47.817 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.817 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.817 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.817 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.817 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.817 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:47.817 "name": "raid_bdev1", 00:21:47.817 "uuid": "d0baa4e4-a16e-4c8c-9322-42ae5711f5d6", 00:21:47.817 "strip_size_kb": 64, 00:21:47.817 "state": "online", 00:21:47.817 "raid_level": "raid0", 00:21:47.817 "superblock": true, 00:21:47.817 
"num_base_bdevs": 4, 00:21:47.817 "num_base_bdevs_discovered": 4, 00:21:47.817 "num_base_bdevs_operational": 4, 00:21:47.817 "base_bdevs_list": [ 00:21:47.817 { 00:21:47.817 "name": "pt1", 00:21:47.817 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:47.817 "is_configured": true, 00:21:47.817 "data_offset": 2048, 00:21:47.817 "data_size": 63488 00:21:47.817 }, 00:21:47.817 { 00:21:47.817 "name": "pt2", 00:21:47.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:47.817 "is_configured": true, 00:21:47.817 "data_offset": 2048, 00:21:47.817 "data_size": 63488 00:21:47.817 }, 00:21:47.817 { 00:21:47.817 "name": "pt3", 00:21:47.817 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:47.817 "is_configured": true, 00:21:47.817 "data_offset": 2048, 00:21:47.817 "data_size": 63488 00:21:47.817 }, 00:21:47.817 { 00:21:47.817 "name": "pt4", 00:21:47.817 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:47.817 "is_configured": true, 00:21:47.817 "data_offset": 2048, 00:21:47.817 "data_size": 63488 00:21:47.817 } 00:21:47.817 ] 00:21:47.817 }' 00:21:47.817 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:47.817 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:48.076 [2024-12-05 12:53:30.510448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:48.076 "name": "raid_bdev1", 00:21:48.076 "aliases": [ 00:21:48.076 "d0baa4e4-a16e-4c8c-9322-42ae5711f5d6" 00:21:48.076 ], 00:21:48.076 "product_name": "Raid Volume", 00:21:48.076 "block_size": 512, 00:21:48.076 "num_blocks": 253952, 00:21:48.076 "uuid": "d0baa4e4-a16e-4c8c-9322-42ae5711f5d6", 00:21:48.076 "assigned_rate_limits": { 00:21:48.076 "rw_ios_per_sec": 0, 00:21:48.076 "rw_mbytes_per_sec": 0, 00:21:48.076 "r_mbytes_per_sec": 0, 00:21:48.076 "w_mbytes_per_sec": 0 00:21:48.076 }, 00:21:48.076 "claimed": false, 00:21:48.076 "zoned": false, 00:21:48.076 "supported_io_types": { 00:21:48.076 "read": true, 00:21:48.076 "write": true, 00:21:48.076 "unmap": true, 00:21:48.076 "flush": true, 00:21:48.076 "reset": true, 00:21:48.076 "nvme_admin": false, 00:21:48.076 "nvme_io": false, 00:21:48.076 "nvme_io_md": false, 00:21:48.076 "write_zeroes": true, 00:21:48.076 "zcopy": false, 00:21:48.076 "get_zone_info": false, 00:21:48.076 "zone_management": false, 00:21:48.076 "zone_append": false, 00:21:48.076 "compare": false, 00:21:48.076 "compare_and_write": false, 00:21:48.076 "abort": false, 00:21:48.076 "seek_hole": false, 00:21:48.076 "seek_data": false, 00:21:48.076 "copy": false, 00:21:48.076 "nvme_iov_md": false 00:21:48.076 }, 00:21:48.076 "memory_domains": [ 00:21:48.076 { 00:21:48.076 "dma_device_id": "system", 
00:21:48.076 "dma_device_type": 1 00:21:48.076 }, 00:21:48.076 { 00:21:48.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.076 "dma_device_type": 2 00:21:48.076 }, 00:21:48.076 { 00:21:48.076 "dma_device_id": "system", 00:21:48.076 "dma_device_type": 1 00:21:48.076 }, 00:21:48.076 { 00:21:48.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.076 "dma_device_type": 2 00:21:48.076 }, 00:21:48.076 { 00:21:48.076 "dma_device_id": "system", 00:21:48.076 "dma_device_type": 1 00:21:48.076 }, 00:21:48.076 { 00:21:48.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.076 "dma_device_type": 2 00:21:48.076 }, 00:21:48.076 { 00:21:48.076 "dma_device_id": "system", 00:21:48.076 "dma_device_type": 1 00:21:48.076 }, 00:21:48.076 { 00:21:48.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.076 "dma_device_type": 2 00:21:48.076 } 00:21:48.076 ], 00:21:48.076 "driver_specific": { 00:21:48.076 "raid": { 00:21:48.076 "uuid": "d0baa4e4-a16e-4c8c-9322-42ae5711f5d6", 00:21:48.076 "strip_size_kb": 64, 00:21:48.076 "state": "online", 00:21:48.076 "raid_level": "raid0", 00:21:48.076 "superblock": true, 00:21:48.076 "num_base_bdevs": 4, 00:21:48.076 "num_base_bdevs_discovered": 4, 00:21:48.076 "num_base_bdevs_operational": 4, 00:21:48.076 "base_bdevs_list": [ 00:21:48.076 { 00:21:48.076 "name": "pt1", 00:21:48.076 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:48.076 "is_configured": true, 00:21:48.076 "data_offset": 2048, 00:21:48.076 "data_size": 63488 00:21:48.076 }, 00:21:48.076 { 00:21:48.076 "name": "pt2", 00:21:48.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:48.076 "is_configured": true, 00:21:48.076 "data_offset": 2048, 00:21:48.076 "data_size": 63488 00:21:48.076 }, 00:21:48.076 { 00:21:48.076 "name": "pt3", 00:21:48.076 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:48.076 "is_configured": true, 00:21:48.076 "data_offset": 2048, 00:21:48.076 "data_size": 63488 00:21:48.076 }, 00:21:48.076 { 00:21:48.076 "name": "pt4", 00:21:48.076 
"uuid": "00000000-0000-0000-0000-000000000004", 00:21:48.076 "is_configured": true, 00:21:48.076 "data_offset": 2048, 00:21:48.076 "data_size": 63488 00:21:48.076 } 00:21:48.076 ] 00:21:48.076 } 00:21:48.076 } 00:21:48.076 }' 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:48.076 pt2 00:21:48.076 pt3 00:21:48.076 pt4' 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.076 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.337 12:53:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.337 [2024-12-05 12:53:30.734456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d0baa4e4-a16e-4c8c-9322-42ae5711f5d6 '!=' d0baa4e4-a16e-4c8c-9322-42ae5711f5d6 ']' 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68784 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68784 ']' 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68784 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:21:48.337 12:53:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68784 00:21:48.337 killing process with pid 68784 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68784' 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 68784 00:21:48.337 [2024-12-05 12:53:30.786822] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:48.337 [2024-12-05 12:53:30.786893] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:48.337 12:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68784 00:21:48.337 [2024-12-05 12:53:30.786955] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:48.337 [2024-12-05 12:53:30.786963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:48.599 [2024-12-05 12:53:30.983323] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:49.172 12:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:49.172 00:21:49.172 real 0m3.862s 00:21:49.172 user 0m5.647s 00:21:49.172 sys 0m0.585s 00:21:49.172 12:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:49.172 12:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.172 ************************************ 00:21:49.172 END TEST raid_superblock_test 00:21:49.172 ************************************ 00:21:49.172 
12:53:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:21:49.172 12:53:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:49.172 12:53:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:49.172 12:53:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:49.172 ************************************ 00:21:49.172 START TEST raid_read_error_test 00:21:49.172 ************************************ 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.58r8t8orNQ 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69026 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:49.172 12:53:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69026 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69026 ']' 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.172 12:53:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.172 [2024-12-05 12:53:31.662167] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:21:49.172 [2024-12-05 12:53:31.662268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69026 ] 00:21:49.433 [2024-12-05 12:53:31.817255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.433 [2024-12-05 12:53:31.959056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.694 [2024-12-05 12:53:32.095706] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:49.694 [2024-12-05 12:53:32.095762] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:49.970 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.970 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:21:49.970 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:49.970 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:49.970 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.970 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.289 BaseBdev1_malloc 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.289 true 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.289 [2024-12-05 12:53:32.582025] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:50.289 [2024-12-05 12:53:32.582083] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:50.289 [2024-12-05 12:53:32.582105] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:50.289 [2024-12-05 12:53:32.582116] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:50.289 [2024-12-05 12:53:32.584305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:50.289 [2024-12-05 12:53:32.584343] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:50.289 BaseBdev1 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.289 BaseBdev2_malloc 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.289 true 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.289 [2024-12-05 12:53:32.630158] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:50.289 [2024-12-05 12:53:32.630209] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:50.289 [2024-12-05 12:53:32.630226] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:50.289 [2024-12-05 12:53:32.630238] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:50.289 [2024-12-05 12:53:32.632379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:50.289 [2024-12-05 12:53:32.632414] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:50.289 BaseBdev2 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.289 BaseBdev3_malloc 00:21:50.289 12:53:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.289 true 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.289 [2024-12-05 12:53:32.694292] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:50.289 [2024-12-05 12:53:32.694350] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:50.289 [2024-12-05 12:53:32.694370] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:50.289 [2024-12-05 12:53:32.694380] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:50.289 [2024-12-05 12:53:32.696570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:50.289 [2024-12-05 12:53:32.696722] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:50.289 BaseBdev3 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.289 BaseBdev4_malloc 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.289 true 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.289 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.289 [2024-12-05 12:53:32.738376] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:21:50.289 [2024-12-05 12:53:32.738431] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:50.289 [2024-12-05 12:53:32.738449] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:50.289 [2024-12-05 12:53:32.738460] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:50.289 [2024-12-05 12:53:32.740612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:50.289 [2024-12-05 12:53:32.740650] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:50.289 BaseBdev4 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.290 [2024-12-05 12:53:32.746442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:50.290 [2024-12-05 12:53:32.748299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:50.290 [2024-12-05 12:53:32.748481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:50.290 [2024-12-05 12:53:32.748567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:50.290 [2024-12-05 12:53:32.748789] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:21:50.290 [2024-12-05 12:53:32.748806] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:50.290 [2024-12-05 12:53:32.749056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:21:50.290 [2024-12-05 12:53:32.749202] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:21:50.290 [2024-12-05 12:53:32.749212] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:21:50.290 [2024-12-05 12:53:32.749360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:50.290 12:53:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.290 "name": "raid_bdev1", 00:21:50.290 "uuid": "d46d749b-54e3-48c0-b147-a0b8d5f2bb84", 00:21:50.290 "strip_size_kb": 64, 00:21:50.290 "state": "online", 00:21:50.290 "raid_level": "raid0", 00:21:50.290 "superblock": true, 00:21:50.290 "num_base_bdevs": 4, 00:21:50.290 "num_base_bdevs_discovered": 4, 00:21:50.290 "num_base_bdevs_operational": 4, 00:21:50.290 "base_bdevs_list": [ 00:21:50.290 
{ 00:21:50.290 "name": "BaseBdev1", 00:21:50.290 "uuid": "e2e58216-3381-55d3-b0c1-27e074052ec3", 00:21:50.290 "is_configured": true, 00:21:50.290 "data_offset": 2048, 00:21:50.290 "data_size": 63488 00:21:50.290 }, 00:21:50.290 { 00:21:50.290 "name": "BaseBdev2", 00:21:50.290 "uuid": "a7f52e51-120c-50c2-8253-539760103510", 00:21:50.290 "is_configured": true, 00:21:50.290 "data_offset": 2048, 00:21:50.290 "data_size": 63488 00:21:50.290 }, 00:21:50.290 { 00:21:50.290 "name": "BaseBdev3", 00:21:50.290 "uuid": "d6f5666a-e455-5d61-85d2-e9af68b6f540", 00:21:50.290 "is_configured": true, 00:21:50.290 "data_offset": 2048, 00:21:50.290 "data_size": 63488 00:21:50.290 }, 00:21:50.290 { 00:21:50.290 "name": "BaseBdev4", 00:21:50.290 "uuid": "5ac64853-de19-56dc-8e12-440466a46daa", 00:21:50.290 "is_configured": true, 00:21:50.290 "data_offset": 2048, 00:21:50.290 "data_size": 63488 00:21:50.290 } 00:21:50.290 ] 00:21:50.290 }' 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.290 12:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.562 12:53:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:50.562 12:53:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:50.562 [2024-12-05 12:53:33.139503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:21:51.496 12:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:51.496 12:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.496 12:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.496 12:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.496 12:53:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:51.496 12:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:21:51.496 12:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:21:51.496 12:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:51.496 12:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:51.496 12:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:51.496 12:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:51.496 12:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:51.496 12:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:51.496 12:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.496 12:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.496 12:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.496 12:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:51.496 12:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.497 12:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.497 12:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.497 12:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.757 12:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.757 12:53:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:51.757 "name": "raid_bdev1", 00:21:51.757 "uuid": "d46d749b-54e3-48c0-b147-a0b8d5f2bb84", 00:21:51.757 "strip_size_kb": 64, 00:21:51.757 "state": "online", 00:21:51.757 "raid_level": "raid0", 00:21:51.757 "superblock": true, 00:21:51.757 "num_base_bdevs": 4, 00:21:51.757 "num_base_bdevs_discovered": 4, 00:21:51.757 "num_base_bdevs_operational": 4, 00:21:51.757 "base_bdevs_list": [ 00:21:51.757 { 00:21:51.757 "name": "BaseBdev1", 00:21:51.757 "uuid": "e2e58216-3381-55d3-b0c1-27e074052ec3", 00:21:51.757 "is_configured": true, 00:21:51.757 "data_offset": 2048, 00:21:51.757 "data_size": 63488 00:21:51.757 }, 00:21:51.757 { 00:21:51.757 "name": "BaseBdev2", 00:21:51.757 "uuid": "a7f52e51-120c-50c2-8253-539760103510", 00:21:51.757 "is_configured": true, 00:21:51.757 "data_offset": 2048, 00:21:51.757 "data_size": 63488 00:21:51.757 }, 00:21:51.757 { 00:21:51.757 "name": "BaseBdev3", 00:21:51.757 "uuid": "d6f5666a-e455-5d61-85d2-e9af68b6f540", 00:21:51.757 "is_configured": true, 00:21:51.757 "data_offset": 2048, 00:21:51.757 "data_size": 63488 00:21:51.757 }, 00:21:51.757 { 00:21:51.757 "name": "BaseBdev4", 00:21:51.757 "uuid": "5ac64853-de19-56dc-8e12-440466a46daa", 00:21:51.757 "is_configured": true, 00:21:51.757 "data_offset": 2048, 00:21:51.757 "data_size": 63488 00:21:51.757 } 00:21:51.757 ] 00:21:51.757 }' 00:21:51.757 12:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:51.757 12:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.016 12:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:52.016 12:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.016 12:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.016 [2024-12-05 12:53:34.373960] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:52.016 [2024-12-05 12:53:34.373989] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:52.016 [2024-12-05 12:53:34.377206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:52.016 [2024-12-05 12:53:34.377347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.016 [2024-12-05 12:53:34.377454] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:52.016 [2024-12-05 12:53:34.377594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:21:52.016 { 00:21:52.016 "results": [ 00:21:52.016 { 00:21:52.016 "job": "raid_bdev1", 00:21:52.016 "core_mask": "0x1", 00:21:52.016 "workload": "randrw", 00:21:52.016 "percentage": 50, 00:21:52.016 "status": "finished", 00:21:52.016 "queue_depth": 1, 00:21:52.016 "io_size": 131072, 00:21:52.016 "runtime": 1.232593, 00:21:52.016 "iops": 14214.748907384675, 00:21:52.016 "mibps": 1776.8436134230844, 00:21:52.016 "io_failed": 1, 00:21:52.016 "io_timeout": 0, 00:21:52.016 "avg_latency_us": 96.02306164557962, 00:21:52.016 "min_latency_us": 34.067692307692305, 00:21:52.016 "max_latency_us": 1676.2092307692308 00:21:52.016 } 00:21:52.016 ], 00:21:52.016 "core_count": 1 00:21:52.016 } 00:21:52.016 12:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.016 12:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69026 00:21:52.016 12:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69026 ']' 00:21:52.016 12:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69026 00:21:52.016 12:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:21:52.016 12:53:34 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:52.016 12:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69026 00:21:52.016 12:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:52.016 12:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:52.016 killing process with pid 69026 00:21:52.016 12:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69026' 00:21:52.016 12:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69026 00:21:52.016 [2024-12-05 12:53:34.406421] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:52.016 12:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69026 00:21:52.277 [2024-12-05 12:53:34.606832] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:52.847 12:53:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:52.847 12:53:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:52.847 12:53:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.58r8t8orNQ 00:21:52.847 12:53:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:21:52.847 12:53:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:21:52.847 12:53:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:52.847 12:53:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:52.847 12:53:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:21:52.847 00:21:52.847 real 0m3.767s 00:21:52.847 user 0m4.445s 00:21:52.847 sys 0m0.407s 00:21:52.847 ************************************ 00:21:52.847 END TEST raid_read_error_test 
00:21:52.847 ************************************ 00:21:52.847 12:53:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:52.847 12:53:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.847 12:53:35 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:21:52.847 12:53:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:52.847 12:53:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:52.847 12:53:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:52.847 ************************************ 00:21:52.847 START TEST raid_write_error_test 00:21:52.847 ************************************ 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XY5cx4tE2A 00:21:52.847 12:53:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69161 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69161 00:21:52.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69161 ']' 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.847 12:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:53.107 [2024-12-05 12:53:35.475408] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:21:53.107 [2024-12-05 12:53:35.475555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69161 ] 00:21:53.107 [2024-12-05 12:53:35.634855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.386 [2024-12-05 12:53:35.737647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.386 [2024-12-05 12:53:35.875217] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:53.386 [2024-12-05 12:53:35.875259] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.956 BaseBdev1_malloc 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.956 true 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.956 [2024-12-05 12:53:36.424576] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:53.956 [2024-12-05 12:53:36.424797] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:53.956 [2024-12-05 12:53:36.424824] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:53.956 [2024-12-05 12:53:36.424835] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:53.956 [2024-12-05 12:53:36.426940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:53.956 [2024-12-05 12:53:36.426978] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:53.956 BaseBdev1 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.956 BaseBdev2_malloc 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:53.956 12:53:36 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.956 true 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.956 [2024-12-05 12:53:36.468354] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:53.956 [2024-12-05 12:53:36.468542] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:53.956 [2024-12-05 12:53:36.468566] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:53.956 [2024-12-05 12:53:36.468577] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:53.956 [2024-12-05 12:53:36.470664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:53.956 [2024-12-05 12:53:36.470695] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:53.956 BaseBdev2 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:21:53.956 BaseBdev3_malloc 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.956 true 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.956 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.956 [2024-12-05 12:53:36.523620] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:53.957 [2024-12-05 12:53:36.523675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:53.957 [2024-12-05 12:53:36.523694] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:53.957 [2024-12-05 12:53:36.523712] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:53.957 [2024-12-05 12:53:36.525826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:53.957 [2024-12-05 12:53:36.525862] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:53.957 BaseBdev3 00:21:53.957 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.957 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:53.957 12:53:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:53.957 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.957 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.217 BaseBdev4_malloc 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.217 true 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.217 [2024-12-05 12:53:36.567532] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:21:54.217 [2024-12-05 12:53:36.567582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.217 [2024-12-05 12:53:36.567599] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:54.217 [2024-12-05 12:53:36.567610] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.217 [2024-12-05 12:53:36.569697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.217 [2024-12-05 12:53:36.569734] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:54.217 BaseBdev4 
00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.217 [2024-12-05 12:53:36.575587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:54.217 [2024-12-05 12:53:36.577415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:54.217 [2024-12-05 12:53:36.577502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:54.217 [2024-12-05 12:53:36.577572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:54.217 [2024-12-05 12:53:36.577787] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:21:54.217 [2024-12-05 12:53:36.577803] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:54.217 [2024-12-05 12:53:36.578043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:21:54.217 [2024-12-05 12:53:36.578186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:21:54.217 [2024-12-05 12:53:36.578196] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:21:54.217 [2024-12-05 12:53:36.578334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:54.217 "name": "raid_bdev1", 00:21:54.217 "uuid": "f26fc2c6-e349-400f-b89c-86bb28a5a734", 00:21:54.217 "strip_size_kb": 64, 00:21:54.217 "state": "online", 00:21:54.217 "raid_level": "raid0", 00:21:54.217 "superblock": true, 00:21:54.217 "num_base_bdevs": 4, 00:21:54.217 "num_base_bdevs_discovered": 4, 00:21:54.217 
"num_base_bdevs_operational": 4, 00:21:54.217 "base_bdevs_list": [ 00:21:54.217 { 00:21:54.217 "name": "BaseBdev1", 00:21:54.217 "uuid": "49832d36-86bc-5467-9521-30a21e63b0da", 00:21:54.217 "is_configured": true, 00:21:54.217 "data_offset": 2048, 00:21:54.217 "data_size": 63488 00:21:54.217 }, 00:21:54.217 { 00:21:54.217 "name": "BaseBdev2", 00:21:54.217 "uuid": "b0465258-0d1d-50e2-b7a7-6fe6c6f1c52b", 00:21:54.217 "is_configured": true, 00:21:54.217 "data_offset": 2048, 00:21:54.217 "data_size": 63488 00:21:54.217 }, 00:21:54.217 { 00:21:54.217 "name": "BaseBdev3", 00:21:54.217 "uuid": "5411e417-9ea9-5c5d-8465-4efbf7ffb892", 00:21:54.217 "is_configured": true, 00:21:54.217 "data_offset": 2048, 00:21:54.217 "data_size": 63488 00:21:54.217 }, 00:21:54.217 { 00:21:54.217 "name": "BaseBdev4", 00:21:54.217 "uuid": "5ac37206-6167-591d-b644-73e74400e624", 00:21:54.217 "is_configured": true, 00:21:54.217 "data_offset": 2048, 00:21:54.217 "data_size": 63488 00:21:54.217 } 00:21:54.217 ] 00:21:54.217 }' 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:54.217 12:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.477 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:54.477 12:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:54.477 [2024-12-05 12:53:36.964599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.417 12:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.417 "name": "raid_bdev1", 00:21:55.418 "uuid": "f26fc2c6-e349-400f-b89c-86bb28a5a734", 00:21:55.418 "strip_size_kb": 64, 00:21:55.418 "state": "online", 00:21:55.418 "raid_level": "raid0", 00:21:55.418 "superblock": true, 00:21:55.418 "num_base_bdevs": 4, 00:21:55.418 "num_base_bdevs_discovered": 4, 00:21:55.418 "num_base_bdevs_operational": 4, 00:21:55.418 "base_bdevs_list": [ 00:21:55.418 { 00:21:55.418 "name": "BaseBdev1", 00:21:55.418 "uuid": "49832d36-86bc-5467-9521-30a21e63b0da", 00:21:55.418 "is_configured": true, 00:21:55.418 "data_offset": 2048, 00:21:55.418 "data_size": 63488 00:21:55.418 }, 00:21:55.418 { 00:21:55.418 "name": "BaseBdev2", 00:21:55.418 "uuid": "b0465258-0d1d-50e2-b7a7-6fe6c6f1c52b", 00:21:55.418 "is_configured": true, 00:21:55.418 "data_offset": 2048, 00:21:55.418 "data_size": 63488 00:21:55.418 }, 00:21:55.418 { 00:21:55.418 "name": "BaseBdev3", 00:21:55.418 "uuid": "5411e417-9ea9-5c5d-8465-4efbf7ffb892", 00:21:55.418 "is_configured": true, 00:21:55.418 "data_offset": 2048, 00:21:55.418 "data_size": 63488 00:21:55.418 }, 00:21:55.418 { 00:21:55.418 "name": "BaseBdev4", 00:21:55.418 "uuid": "5ac37206-6167-591d-b644-73e74400e624", 00:21:55.418 "is_configured": true, 00:21:55.418 "data_offset": 2048, 00:21:55.418 "data_size": 63488 00:21:55.418 } 00:21:55.418 ] 00:21:55.418 }' 00:21:55.418 12:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.418 12:53:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.678 12:53:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:55.678 12:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.678 12:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:21:55.678 [2024-12-05 12:53:38.202570] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:55.678 [2024-12-05 12:53:38.202600] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:55.678 [2024-12-05 12:53:38.205671] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:55.678 [2024-12-05 12:53:38.205734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:55.678 [2024-12-05 12:53:38.205777] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:55.678 [2024-12-05 12:53:38.205788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:21:55.678 { 00:21:55.678 "results": [ 00:21:55.678 { 00:21:55.678 "job": "raid_bdev1", 00:21:55.678 "core_mask": "0x1", 00:21:55.678 "workload": "randrw", 00:21:55.678 "percentage": 50, 00:21:55.678 "status": "finished", 00:21:55.678 "queue_depth": 1, 00:21:55.678 "io_size": 131072, 00:21:55.678 "runtime": 1.236159, 00:21:55.678 "iops": 14278.90748682006, 00:21:55.678 "mibps": 1784.8634358525076, 00:21:55.678 "io_failed": 1, 00:21:55.678 "io_timeout": 0, 00:21:55.678 "avg_latency_us": 95.60295769492234, 00:21:55.678 "min_latency_us": 34.26461538461538, 00:21:55.678 "max_latency_us": 1714.0184615384615 00:21:55.678 } 00:21:55.678 ], 00:21:55.678 "core_count": 1 00:21:55.678 } 00:21:55.678 12:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.678 12:53:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69161 00:21:55.678 12:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69161 ']' 00:21:55.678 12:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69161 00:21:55.678 12:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:21:55.678 12:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:55.678 12:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69161 00:21:55.678 killing process with pid 69161 00:21:55.678 12:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:55.678 12:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:55.678 12:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69161' 00:21:55.678 12:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69161 00:21:55.678 [2024-12-05 12:53:38.230688] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:55.678 12:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69161 00:21:55.938 [2024-12-05 12:53:38.430370] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:57.055 12:53:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XY5cx4tE2A 00:21:57.055 12:53:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:57.055 12:53:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:57.055 12:53:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:21:57.055 12:53:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:21:57.055 12:53:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:57.055 12:53:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:57.055 12:53:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:21:57.055 00:21:57.055 real 0m3.789s 00:21:57.055 user 0m4.506s 00:21:57.055 sys 0m0.403s 00:21:57.055 12:53:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:57.055 12:53:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.055 ************************************ 00:21:57.055 END TEST raid_write_error_test 00:21:57.055 ************************************ 00:21:57.055 12:53:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:21:57.055 12:53:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:21:57.055 12:53:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:57.055 12:53:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:57.055 12:53:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:57.055 ************************************ 00:21:57.055 START TEST raid_state_function_test 00:21:57.055 ************************************ 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69299 00:21:57.055 Process raid pid: 69299 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69299' 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69299 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69299 ']' 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:57.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:57.055 12:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.055 [2024-12-05 12:53:39.299682] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:21:57.055 [2024-12-05 12:53:39.299807] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.055 [2024-12-05 12:53:39.461265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.055 [2024-12-05 12:53:39.560645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.317 [2024-12-05 12:53:39.696864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:57.317 [2024-12-05 12:53:39.696898] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:57.578 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:57.578 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:21:57.578 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:57.578 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.578 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.578 [2024-12-05 12:53:40.119644] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:57.578 [2024-12-05 12:53:40.119700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:57.578 [2024-12-05 12:53:40.119717] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:57.578 [2024-12-05 12:53:40.119727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:57.578 [2024-12-05 12:53:40.119733] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:21:57.578 [2024-12-05 12:53:40.119742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:57.578 [2024-12-05 12:53:40.119748] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:57.578 [2024-12-05 12:53:40.119756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:57.578 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.578 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:57.578 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:57.578 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:57.578 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:57.578 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:57.578 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:57.578 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.578 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.578 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.578 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.578 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.578 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.578 12:53:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.578 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.578 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.837 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:57.837 "name": "Existed_Raid", 00:21:57.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.837 "strip_size_kb": 64, 00:21:57.837 "state": "configuring", 00:21:57.837 "raid_level": "concat", 00:21:57.837 "superblock": false, 00:21:57.837 "num_base_bdevs": 4, 00:21:57.837 "num_base_bdevs_discovered": 0, 00:21:57.837 "num_base_bdevs_operational": 4, 00:21:57.837 "base_bdevs_list": [ 00:21:57.837 { 00:21:57.837 "name": "BaseBdev1", 00:21:57.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.837 "is_configured": false, 00:21:57.837 "data_offset": 0, 00:21:57.837 "data_size": 0 00:21:57.837 }, 00:21:57.837 { 00:21:57.837 "name": "BaseBdev2", 00:21:57.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.837 "is_configured": false, 00:21:57.837 "data_offset": 0, 00:21:57.837 "data_size": 0 00:21:57.837 }, 00:21:57.837 { 00:21:57.837 "name": "BaseBdev3", 00:21:57.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.837 "is_configured": false, 00:21:57.837 "data_offset": 0, 00:21:57.837 "data_size": 0 00:21:57.837 }, 00:21:57.837 { 00:21:57.837 "name": "BaseBdev4", 00:21:57.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.837 "is_configured": false, 00:21:57.837 "data_offset": 0, 00:21:57.837 "data_size": 0 00:21:57.837 } 00:21:57.837 ] 00:21:57.837 }' 00:21:57.837 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.837 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.098 [2024-12-05 12:53:40.451658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:58.098 [2024-12-05 12:53:40.451694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.098 [2024-12-05 12:53:40.459674] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:58.098 [2024-12-05 12:53:40.459720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:58.098 [2024-12-05 12:53:40.459728] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:58.098 [2024-12-05 12:53:40.459737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:58.098 [2024-12-05 12:53:40.459744] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:58.098 [2024-12-05 12:53:40.459752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:58.098 [2024-12-05 12:53:40.459758] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:58.098 [2024-12-05 12:53:40.459766] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.098 [2024-12-05 12:53:40.492093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:58.098 BaseBdev1 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.098 [ 00:21:58.098 { 00:21:58.098 "name": "BaseBdev1", 00:21:58.098 "aliases": [ 00:21:58.098 "f8a0678b-9c34-4809-b9fb-9d2fa7134a24" 00:21:58.098 ], 00:21:58.098 "product_name": "Malloc disk", 00:21:58.098 "block_size": 512, 00:21:58.098 "num_blocks": 65536, 00:21:58.098 "uuid": "f8a0678b-9c34-4809-b9fb-9d2fa7134a24", 00:21:58.098 "assigned_rate_limits": { 00:21:58.098 "rw_ios_per_sec": 0, 00:21:58.098 "rw_mbytes_per_sec": 0, 00:21:58.098 "r_mbytes_per_sec": 0, 00:21:58.098 "w_mbytes_per_sec": 0 00:21:58.098 }, 00:21:58.098 "claimed": true, 00:21:58.098 "claim_type": "exclusive_write", 00:21:58.098 "zoned": false, 00:21:58.098 "supported_io_types": { 00:21:58.098 "read": true, 00:21:58.098 "write": true, 00:21:58.098 "unmap": true, 00:21:58.098 "flush": true, 00:21:58.098 "reset": true, 00:21:58.098 "nvme_admin": false, 00:21:58.098 "nvme_io": false, 00:21:58.098 "nvme_io_md": false, 00:21:58.098 "write_zeroes": true, 00:21:58.098 "zcopy": true, 00:21:58.098 "get_zone_info": false, 00:21:58.098 "zone_management": false, 00:21:58.098 "zone_append": false, 00:21:58.098 "compare": false, 00:21:58.098 "compare_and_write": false, 00:21:58.098 "abort": true, 00:21:58.098 "seek_hole": false, 00:21:58.098 "seek_data": false, 00:21:58.098 "copy": true, 00:21:58.098 "nvme_iov_md": false 00:21:58.098 }, 00:21:58.098 "memory_domains": [ 00:21:58.098 { 00:21:58.098 "dma_device_id": "system", 00:21:58.098 "dma_device_type": 1 00:21:58.098 }, 00:21:58.098 { 00:21:58.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.098 "dma_device_type": 2 00:21:58.098 } 00:21:58.098 ], 00:21:58.098 "driver_specific": {} 00:21:58.098 } 00:21:58.098 ] 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:58.098 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.099 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.099 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.099 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.099 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.099 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:58.099 "name": "Existed_Raid", 
00:21:58.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.099 "strip_size_kb": 64, 00:21:58.099 "state": "configuring", 00:21:58.099 "raid_level": "concat", 00:21:58.099 "superblock": false, 00:21:58.099 "num_base_bdevs": 4, 00:21:58.099 "num_base_bdevs_discovered": 1, 00:21:58.099 "num_base_bdevs_operational": 4, 00:21:58.099 "base_bdevs_list": [ 00:21:58.099 { 00:21:58.099 "name": "BaseBdev1", 00:21:58.099 "uuid": "f8a0678b-9c34-4809-b9fb-9d2fa7134a24", 00:21:58.099 "is_configured": true, 00:21:58.099 "data_offset": 0, 00:21:58.099 "data_size": 65536 00:21:58.099 }, 00:21:58.099 { 00:21:58.099 "name": "BaseBdev2", 00:21:58.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.099 "is_configured": false, 00:21:58.099 "data_offset": 0, 00:21:58.099 "data_size": 0 00:21:58.099 }, 00:21:58.099 { 00:21:58.099 "name": "BaseBdev3", 00:21:58.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.099 "is_configured": false, 00:21:58.099 "data_offset": 0, 00:21:58.099 "data_size": 0 00:21:58.099 }, 00:21:58.099 { 00:21:58.099 "name": "BaseBdev4", 00:21:58.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.099 "is_configured": false, 00:21:58.099 "data_offset": 0, 00:21:58.099 "data_size": 0 00:21:58.099 } 00:21:58.099 ] 00:21:58.099 }' 00:21:58.099 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:58.099 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.359 [2024-12-05 12:53:40.852195] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:58.359 [2024-12-05 12:53:40.852238] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.359 [2024-12-05 12:53:40.864253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:58.359 [2024-12-05 12:53:40.865786] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:58.359 [2024-12-05 12:53:40.865823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:58.359 [2024-12-05 12:53:40.865831] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:58.359 [2024-12-05 12:53:40.865839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:58.359 [2024-12-05 12:53:40.865845] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:58.359 [2024-12-05 12:53:40.865852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:58.359 "name": "Existed_Raid", 00:21:58.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.359 "strip_size_kb": 64, 00:21:58.359 "state": "configuring", 00:21:58.359 "raid_level": "concat", 00:21:58.359 "superblock": false, 00:21:58.359 "num_base_bdevs": 4, 00:21:58.359 
"num_base_bdevs_discovered": 1, 00:21:58.359 "num_base_bdevs_operational": 4, 00:21:58.359 "base_bdevs_list": [ 00:21:58.359 { 00:21:58.359 "name": "BaseBdev1", 00:21:58.359 "uuid": "f8a0678b-9c34-4809-b9fb-9d2fa7134a24", 00:21:58.359 "is_configured": true, 00:21:58.359 "data_offset": 0, 00:21:58.359 "data_size": 65536 00:21:58.359 }, 00:21:58.359 { 00:21:58.359 "name": "BaseBdev2", 00:21:58.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.359 "is_configured": false, 00:21:58.359 "data_offset": 0, 00:21:58.359 "data_size": 0 00:21:58.359 }, 00:21:58.359 { 00:21:58.359 "name": "BaseBdev3", 00:21:58.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.359 "is_configured": false, 00:21:58.359 "data_offset": 0, 00:21:58.359 "data_size": 0 00:21:58.359 }, 00:21:58.359 { 00:21:58.359 "name": "BaseBdev4", 00:21:58.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.359 "is_configured": false, 00:21:58.359 "data_offset": 0, 00:21:58.359 "data_size": 0 00:21:58.359 } 00:21:58.359 ] 00:21:58.359 }' 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:58.359 12:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.619 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:58.619 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.619 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.619 [2024-12-05 12:53:41.190808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:58.619 BaseBdev2 00:21:58.619 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.619 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:58.619 12:53:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:58.619 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:58.619 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:58.619 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:58.619 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:58.619 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:58.619 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.619 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.619 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.619 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:58.880 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.880 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.880 [ 00:21:58.880 { 00:21:58.880 "name": "BaseBdev2", 00:21:58.880 "aliases": [ 00:21:58.880 "d739f635-6c72-40f3-881d-3d284bf8f6b1" 00:21:58.880 ], 00:21:58.880 "product_name": "Malloc disk", 00:21:58.880 "block_size": 512, 00:21:58.880 "num_blocks": 65536, 00:21:58.880 "uuid": "d739f635-6c72-40f3-881d-3d284bf8f6b1", 00:21:58.880 "assigned_rate_limits": { 00:21:58.880 "rw_ios_per_sec": 0, 00:21:58.880 "rw_mbytes_per_sec": 0, 00:21:58.880 "r_mbytes_per_sec": 0, 00:21:58.880 "w_mbytes_per_sec": 0 00:21:58.880 }, 00:21:58.880 "claimed": true, 00:21:58.880 "claim_type": "exclusive_write", 00:21:58.880 "zoned": false, 00:21:58.880 "supported_io_types": { 
00:21:58.880 "read": true, 00:21:58.880 "write": true, 00:21:58.880 "unmap": true, 00:21:58.880 "flush": true, 00:21:58.880 "reset": true, 00:21:58.880 "nvme_admin": false, 00:21:58.880 "nvme_io": false, 00:21:58.880 "nvme_io_md": false, 00:21:58.880 "write_zeroes": true, 00:21:58.880 "zcopy": true, 00:21:58.880 "get_zone_info": false, 00:21:58.880 "zone_management": false, 00:21:58.880 "zone_append": false, 00:21:58.880 "compare": false, 00:21:58.880 "compare_and_write": false, 00:21:58.880 "abort": true, 00:21:58.880 "seek_hole": false, 00:21:58.881 "seek_data": false, 00:21:58.881 "copy": true, 00:21:58.881 "nvme_iov_md": false 00:21:58.881 }, 00:21:58.881 "memory_domains": [ 00:21:58.881 { 00:21:58.881 "dma_device_id": "system", 00:21:58.881 "dma_device_type": 1 00:21:58.881 }, 00:21:58.881 { 00:21:58.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.881 "dma_device_type": 2 00:21:58.881 } 00:21:58.881 ], 00:21:58.881 "driver_specific": {} 00:21:58.881 } 00:21:58.881 ] 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:58.881 "name": "Existed_Raid", 00:21:58.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.881 "strip_size_kb": 64, 00:21:58.881 "state": "configuring", 00:21:58.881 "raid_level": "concat", 00:21:58.881 "superblock": false, 00:21:58.881 "num_base_bdevs": 4, 00:21:58.881 "num_base_bdevs_discovered": 2, 00:21:58.881 "num_base_bdevs_operational": 4, 00:21:58.881 "base_bdevs_list": [ 00:21:58.881 { 00:21:58.881 "name": "BaseBdev1", 00:21:58.881 "uuid": "f8a0678b-9c34-4809-b9fb-9d2fa7134a24", 00:21:58.881 "is_configured": true, 00:21:58.881 "data_offset": 0, 00:21:58.881 "data_size": 65536 00:21:58.881 }, 00:21:58.881 { 00:21:58.881 "name": "BaseBdev2", 00:21:58.881 "uuid": "d739f635-6c72-40f3-881d-3d284bf8f6b1", 00:21:58.881 
"is_configured": true, 00:21:58.881 "data_offset": 0, 00:21:58.881 "data_size": 65536 00:21:58.881 }, 00:21:58.881 { 00:21:58.881 "name": "BaseBdev3", 00:21:58.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.881 "is_configured": false, 00:21:58.881 "data_offset": 0, 00:21:58.881 "data_size": 0 00:21:58.881 }, 00:21:58.881 { 00:21:58.881 "name": "BaseBdev4", 00:21:58.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.881 "is_configured": false, 00:21:58.881 "data_offset": 0, 00:21:58.881 "data_size": 0 00:21:58.881 } 00:21:58.881 ] 00:21:58.881 }' 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:58.881 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.198 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:59.198 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.198 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.198 [2024-12-05 12:53:41.553582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:59.198 BaseBdev3 00:21:59.198 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.198 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:59.198 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:59.198 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:59.198 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:59.198 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:59.198 12:53:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.199 [ 00:21:59.199 { 00:21:59.199 "name": "BaseBdev3", 00:21:59.199 "aliases": [ 00:21:59.199 "92a05b3f-a4c8-4167-a687-87a58a6d95b6" 00:21:59.199 ], 00:21:59.199 "product_name": "Malloc disk", 00:21:59.199 "block_size": 512, 00:21:59.199 "num_blocks": 65536, 00:21:59.199 "uuid": "92a05b3f-a4c8-4167-a687-87a58a6d95b6", 00:21:59.199 "assigned_rate_limits": { 00:21:59.199 "rw_ios_per_sec": 0, 00:21:59.199 "rw_mbytes_per_sec": 0, 00:21:59.199 "r_mbytes_per_sec": 0, 00:21:59.199 "w_mbytes_per_sec": 0 00:21:59.199 }, 00:21:59.199 "claimed": true, 00:21:59.199 "claim_type": "exclusive_write", 00:21:59.199 "zoned": false, 00:21:59.199 "supported_io_types": { 00:21:59.199 "read": true, 00:21:59.199 "write": true, 00:21:59.199 "unmap": true, 00:21:59.199 "flush": true, 00:21:59.199 "reset": true, 00:21:59.199 "nvme_admin": false, 00:21:59.199 "nvme_io": false, 00:21:59.199 "nvme_io_md": false, 00:21:59.199 "write_zeroes": true, 00:21:59.199 "zcopy": true, 00:21:59.199 "get_zone_info": false, 00:21:59.199 "zone_management": false, 00:21:59.199 "zone_append": false, 00:21:59.199 "compare": false, 00:21:59.199 "compare_and_write": false, 
00:21:59.199 "abort": true, 00:21:59.199 "seek_hole": false, 00:21:59.199 "seek_data": false, 00:21:59.199 "copy": true, 00:21:59.199 "nvme_iov_md": false 00:21:59.199 }, 00:21:59.199 "memory_domains": [ 00:21:59.199 { 00:21:59.199 "dma_device_id": "system", 00:21:59.199 "dma_device_type": 1 00:21:59.199 }, 00:21:59.199 { 00:21:59.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.199 "dma_device_type": 2 00:21:59.199 } 00:21:59.199 ], 00:21:59.199 "driver_specific": {} 00:21:59.199 } 00:21:59.199 ] 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.199 "name": "Existed_Raid", 00:21:59.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.199 "strip_size_kb": 64, 00:21:59.199 "state": "configuring", 00:21:59.199 "raid_level": "concat", 00:21:59.199 "superblock": false, 00:21:59.199 "num_base_bdevs": 4, 00:21:59.199 "num_base_bdevs_discovered": 3, 00:21:59.199 "num_base_bdevs_operational": 4, 00:21:59.199 "base_bdevs_list": [ 00:21:59.199 { 00:21:59.199 "name": "BaseBdev1", 00:21:59.199 "uuid": "f8a0678b-9c34-4809-b9fb-9d2fa7134a24", 00:21:59.199 "is_configured": true, 00:21:59.199 "data_offset": 0, 00:21:59.199 "data_size": 65536 00:21:59.199 }, 00:21:59.199 { 00:21:59.199 "name": "BaseBdev2", 00:21:59.199 "uuid": "d739f635-6c72-40f3-881d-3d284bf8f6b1", 00:21:59.199 "is_configured": true, 00:21:59.199 "data_offset": 0, 00:21:59.199 "data_size": 65536 00:21:59.199 }, 00:21:59.199 { 00:21:59.199 "name": "BaseBdev3", 00:21:59.199 "uuid": "92a05b3f-a4c8-4167-a687-87a58a6d95b6", 00:21:59.199 "is_configured": true, 00:21:59.199 "data_offset": 0, 00:21:59.199 "data_size": 65536 00:21:59.199 }, 00:21:59.199 { 00:21:59.199 "name": "BaseBdev4", 00:21:59.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.199 "is_configured": false, 
00:21:59.199 "data_offset": 0, 00:21:59.199 "data_size": 0 00:21:59.199 } 00:21:59.199 ] 00:21:59.199 }' 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.199 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.473 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:59.473 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.473 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.473 [2024-12-05 12:53:41.928297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:59.473 [2024-12-05 12:53:41.928339] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:59.474 [2024-12-05 12:53:41.928346] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:59.474 [2024-12-05 12:53:41.928576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:59.474 [2024-12-05 12:53:41.928701] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:59.474 [2024-12-05 12:53:41.928716] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:59.474 [2024-12-05 12:53:41.928908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:59.474 BaseBdev4 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.474 [ 00:21:59.474 { 00:21:59.474 "name": "BaseBdev4", 00:21:59.474 "aliases": [ 00:21:59.474 "7bf23822-4f37-4357-9680-33a8e3c5b046" 00:21:59.474 ], 00:21:59.474 "product_name": "Malloc disk", 00:21:59.474 "block_size": 512, 00:21:59.474 "num_blocks": 65536, 00:21:59.474 "uuid": "7bf23822-4f37-4357-9680-33a8e3c5b046", 00:21:59.474 "assigned_rate_limits": { 00:21:59.474 "rw_ios_per_sec": 0, 00:21:59.474 "rw_mbytes_per_sec": 0, 00:21:59.474 "r_mbytes_per_sec": 0, 00:21:59.474 "w_mbytes_per_sec": 0 00:21:59.474 }, 00:21:59.474 "claimed": true, 00:21:59.474 "claim_type": "exclusive_write", 00:21:59.474 "zoned": false, 00:21:59.474 "supported_io_types": { 00:21:59.474 "read": true, 00:21:59.474 "write": true, 00:21:59.474 "unmap": true, 00:21:59.474 "flush": true, 00:21:59.474 "reset": true, 00:21:59.474 
"nvme_admin": false, 00:21:59.474 "nvme_io": false, 00:21:59.474 "nvme_io_md": false, 00:21:59.474 "write_zeroes": true, 00:21:59.474 "zcopy": true, 00:21:59.474 "get_zone_info": false, 00:21:59.474 "zone_management": false, 00:21:59.474 "zone_append": false, 00:21:59.474 "compare": false, 00:21:59.474 "compare_and_write": false, 00:21:59.474 "abort": true, 00:21:59.474 "seek_hole": false, 00:21:59.474 "seek_data": false, 00:21:59.474 "copy": true, 00:21:59.474 "nvme_iov_md": false 00:21:59.474 }, 00:21:59.474 "memory_domains": [ 00:21:59.474 { 00:21:59.474 "dma_device_id": "system", 00:21:59.474 "dma_device_type": 1 00:21:59.474 }, 00:21:59.474 { 00:21:59.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.474 "dma_device_type": 2 00:21:59.474 } 00:21:59.474 ], 00:21:59.474 "driver_specific": {} 00:21:59.474 } 00:21:59.474 ] 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:59.474 
12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.474 12:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.474 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.474 "name": "Existed_Raid", 00:21:59.474 "uuid": "f857adb3-fe52-45fe-9cb7-0b378e7170e8", 00:21:59.474 "strip_size_kb": 64, 00:21:59.474 "state": "online", 00:21:59.474 "raid_level": "concat", 00:21:59.474 "superblock": false, 00:21:59.474 "num_base_bdevs": 4, 00:21:59.474 "num_base_bdevs_discovered": 4, 00:21:59.474 "num_base_bdevs_operational": 4, 00:21:59.474 "base_bdevs_list": [ 00:21:59.474 { 00:21:59.474 "name": "BaseBdev1", 00:21:59.474 "uuid": "f8a0678b-9c34-4809-b9fb-9d2fa7134a24", 00:21:59.474 "is_configured": true, 00:21:59.474 "data_offset": 0, 00:21:59.474 "data_size": 65536 00:21:59.474 }, 00:21:59.474 { 00:21:59.474 "name": "BaseBdev2", 00:21:59.474 "uuid": "d739f635-6c72-40f3-881d-3d284bf8f6b1", 00:21:59.474 "is_configured": true, 00:21:59.474 "data_offset": 0, 00:21:59.474 "data_size": 65536 00:21:59.474 }, 00:21:59.474 { 00:21:59.474 "name": "BaseBdev3", 
00:21:59.474 "uuid": "92a05b3f-a4c8-4167-a687-87a58a6d95b6", 00:21:59.474 "is_configured": true, 00:21:59.474 "data_offset": 0, 00:21:59.474 "data_size": 65536 00:21:59.474 }, 00:21:59.474 { 00:21:59.474 "name": "BaseBdev4", 00:21:59.474 "uuid": "7bf23822-4f37-4357-9680-33a8e3c5b046", 00:21:59.474 "is_configured": true, 00:21:59.474 "data_offset": 0, 00:21:59.474 "data_size": 65536 00:21:59.474 } 00:21:59.474 ] 00:21:59.474 }' 00:21:59.474 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.474 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.736 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:59.736 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:59.736 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:59.736 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:59.736 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:59.736 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:59.737 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:59.737 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:59.737 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.737 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.737 [2024-12-05 12:53:42.304735] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:59.737 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.998 
12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:59.998 "name": "Existed_Raid", 00:21:59.998 "aliases": [ 00:21:59.998 "f857adb3-fe52-45fe-9cb7-0b378e7170e8" 00:21:59.998 ], 00:21:59.998 "product_name": "Raid Volume", 00:21:59.998 "block_size": 512, 00:21:59.998 "num_blocks": 262144, 00:21:59.998 "uuid": "f857adb3-fe52-45fe-9cb7-0b378e7170e8", 00:21:59.998 "assigned_rate_limits": { 00:21:59.998 "rw_ios_per_sec": 0, 00:21:59.998 "rw_mbytes_per_sec": 0, 00:21:59.998 "r_mbytes_per_sec": 0, 00:21:59.998 "w_mbytes_per_sec": 0 00:21:59.998 }, 00:21:59.998 "claimed": false, 00:21:59.998 "zoned": false, 00:21:59.998 "supported_io_types": { 00:21:59.998 "read": true, 00:21:59.998 "write": true, 00:21:59.998 "unmap": true, 00:21:59.998 "flush": true, 00:21:59.998 "reset": true, 00:21:59.998 "nvme_admin": false, 00:21:59.998 "nvme_io": false, 00:21:59.998 "nvme_io_md": false, 00:21:59.998 "write_zeroes": true, 00:21:59.998 "zcopy": false, 00:21:59.998 "get_zone_info": false, 00:21:59.998 "zone_management": false, 00:21:59.998 "zone_append": false, 00:21:59.998 "compare": false, 00:21:59.998 "compare_and_write": false, 00:21:59.998 "abort": false, 00:21:59.998 "seek_hole": false, 00:21:59.998 "seek_data": false, 00:21:59.998 "copy": false, 00:21:59.998 "nvme_iov_md": false 00:21:59.998 }, 00:21:59.998 "memory_domains": [ 00:21:59.998 { 00:21:59.998 "dma_device_id": "system", 00:21:59.998 "dma_device_type": 1 00:21:59.998 }, 00:21:59.998 { 00:21:59.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.998 "dma_device_type": 2 00:21:59.998 }, 00:21:59.998 { 00:21:59.998 "dma_device_id": "system", 00:21:59.998 "dma_device_type": 1 00:21:59.999 }, 00:21:59.999 { 00:21:59.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.999 "dma_device_type": 2 00:21:59.999 }, 00:21:59.999 { 00:21:59.999 "dma_device_id": "system", 00:21:59.999 "dma_device_type": 1 00:21:59.999 }, 00:21:59.999 { 00:21:59.999 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:59.999 "dma_device_type": 2 00:21:59.999 }, 00:21:59.999 { 00:21:59.999 "dma_device_id": "system", 00:21:59.999 "dma_device_type": 1 00:21:59.999 }, 00:21:59.999 { 00:21:59.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.999 "dma_device_type": 2 00:21:59.999 } 00:21:59.999 ], 00:21:59.999 "driver_specific": { 00:21:59.999 "raid": { 00:21:59.999 "uuid": "f857adb3-fe52-45fe-9cb7-0b378e7170e8", 00:21:59.999 "strip_size_kb": 64, 00:21:59.999 "state": "online", 00:21:59.999 "raid_level": "concat", 00:21:59.999 "superblock": false, 00:21:59.999 "num_base_bdevs": 4, 00:21:59.999 "num_base_bdevs_discovered": 4, 00:21:59.999 "num_base_bdevs_operational": 4, 00:21:59.999 "base_bdevs_list": [ 00:21:59.999 { 00:21:59.999 "name": "BaseBdev1", 00:21:59.999 "uuid": "f8a0678b-9c34-4809-b9fb-9d2fa7134a24", 00:21:59.999 "is_configured": true, 00:21:59.999 "data_offset": 0, 00:21:59.999 "data_size": 65536 00:21:59.999 }, 00:21:59.999 { 00:21:59.999 "name": "BaseBdev2", 00:21:59.999 "uuid": "d739f635-6c72-40f3-881d-3d284bf8f6b1", 00:21:59.999 "is_configured": true, 00:21:59.999 "data_offset": 0, 00:21:59.999 "data_size": 65536 00:21:59.999 }, 00:21:59.999 { 00:21:59.999 "name": "BaseBdev3", 00:21:59.999 "uuid": "92a05b3f-a4c8-4167-a687-87a58a6d95b6", 00:21:59.999 "is_configured": true, 00:21:59.999 "data_offset": 0, 00:21:59.999 "data_size": 65536 00:21:59.999 }, 00:21:59.999 { 00:21:59.999 "name": "BaseBdev4", 00:21:59.999 "uuid": "7bf23822-4f37-4357-9680-33a8e3c5b046", 00:21:59.999 "is_configured": true, 00:21:59.999 "data_offset": 0, 00:21:59.999 "data_size": 65536 00:21:59.999 } 00:21:59.999 ] 00:21:59.999 } 00:21:59.999 } 00:21:59.999 }' 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:59.999 BaseBdev2 
00:21:59.999 BaseBdev3 00:21:59.999 BaseBdev4' 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.999 12:53:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:59.999 12:53:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.999 [2024-12-05 12:53:42.524514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:59.999 [2024-12-05 12:53:42.524540] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:59.999 [2024-12-05 12:53:42.524580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:59.999 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:00.000 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:00.000 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:00.000 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:00.000 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.000 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.000 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.000 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.260 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.260 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:00.260 "name": "Existed_Raid", 00:22:00.260 "uuid": "f857adb3-fe52-45fe-9cb7-0b378e7170e8", 00:22:00.260 "strip_size_kb": 64, 00:22:00.260 "state": "offline", 00:22:00.260 "raid_level": "concat", 00:22:00.260 "superblock": false, 00:22:00.260 "num_base_bdevs": 4, 00:22:00.260 "num_base_bdevs_discovered": 3, 00:22:00.260 "num_base_bdevs_operational": 3, 00:22:00.260 "base_bdevs_list": [ 00:22:00.260 { 00:22:00.260 "name": null, 00:22:00.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.260 "is_configured": false, 00:22:00.260 "data_offset": 0, 00:22:00.260 "data_size": 65536 00:22:00.260 }, 00:22:00.260 { 00:22:00.260 "name": "BaseBdev2", 00:22:00.260 "uuid": "d739f635-6c72-40f3-881d-3d284bf8f6b1", 00:22:00.260 "is_configured": 
true, 00:22:00.260 "data_offset": 0, 00:22:00.260 "data_size": 65536 00:22:00.260 }, 00:22:00.260 { 00:22:00.260 "name": "BaseBdev3", 00:22:00.260 "uuid": "92a05b3f-a4c8-4167-a687-87a58a6d95b6", 00:22:00.260 "is_configured": true, 00:22:00.260 "data_offset": 0, 00:22:00.260 "data_size": 65536 00:22:00.260 }, 00:22:00.260 { 00:22:00.260 "name": "BaseBdev4", 00:22:00.260 "uuid": "7bf23822-4f37-4357-9680-33a8e3c5b046", 00:22:00.260 "is_configured": true, 00:22:00.260 "data_offset": 0, 00:22:00.260 "data_size": 65536 00:22:00.260 } 00:22:00.260 ] 00:22:00.260 }' 00:22:00.260 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:00.260 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.522 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:00.522 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:00.522 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:00.522 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.522 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.522 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.522 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.522 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:00.522 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:00.522 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:00.522 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:00.522 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.522 [2024-12-05 12:53:42.926784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:00.522 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.522 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:00.522 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:00.522 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.522 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.522 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.522 12:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:00.522 12:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.522 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:00.522 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:00.522 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:00.522 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.522 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.522 [2024-12-05 12:53:43.013575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:00.522 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.522 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:00.522 12:53:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:00.522 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.522 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:00.522 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.522 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.522 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.522 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:00.522 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:00.522 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:22:00.522 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.522 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.522 [2024-12-05 12:53:43.099547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:00.522 [2024-12-05 12:53:43.099588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.783 BaseBdev2 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.783 [ 00:22:00.783 { 00:22:00.783 "name": "BaseBdev2", 00:22:00.783 "aliases": [ 00:22:00.783 "6a548df3-6350-4179-888c-8c99684b38ee" 00:22:00.783 ], 00:22:00.783 "product_name": "Malloc disk", 00:22:00.783 "block_size": 512, 00:22:00.783 "num_blocks": 65536, 00:22:00.783 "uuid": "6a548df3-6350-4179-888c-8c99684b38ee", 00:22:00.783 "assigned_rate_limits": { 00:22:00.783 "rw_ios_per_sec": 0, 00:22:00.783 "rw_mbytes_per_sec": 0, 00:22:00.783 "r_mbytes_per_sec": 0, 00:22:00.783 "w_mbytes_per_sec": 0 00:22:00.783 }, 00:22:00.783 "claimed": false, 00:22:00.783 "zoned": false, 00:22:00.783 "supported_io_types": { 00:22:00.783 "read": true, 00:22:00.783 "write": true, 00:22:00.783 "unmap": true, 00:22:00.783 "flush": true, 00:22:00.783 "reset": true, 00:22:00.783 "nvme_admin": false, 00:22:00.783 "nvme_io": false, 00:22:00.783 "nvme_io_md": false, 00:22:00.783 "write_zeroes": true, 00:22:00.783 "zcopy": true, 00:22:00.783 "get_zone_info": false, 00:22:00.783 "zone_management": false, 00:22:00.783 "zone_append": false, 00:22:00.783 "compare": false, 00:22:00.783 "compare_and_write": false, 00:22:00.783 "abort": true, 00:22:00.783 "seek_hole": false, 00:22:00.783 
"seek_data": false, 00:22:00.783 "copy": true, 00:22:00.783 "nvme_iov_md": false 00:22:00.783 }, 00:22:00.783 "memory_domains": [ 00:22:00.783 { 00:22:00.783 "dma_device_id": "system", 00:22:00.783 "dma_device_type": 1 00:22:00.783 }, 00:22:00.783 { 00:22:00.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.783 "dma_device_type": 2 00:22:00.783 } 00:22:00.783 ], 00:22:00.783 "driver_specific": {} 00:22:00.783 } 00:22:00.783 ] 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:00.783 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.784 BaseBdev3 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.784 [ 00:22:00.784 { 00:22:00.784 "name": "BaseBdev3", 00:22:00.784 "aliases": [ 00:22:00.784 "466ed25e-ddce-46b3-aa6c-8c7c8c255b48" 00:22:00.784 ], 00:22:00.784 "product_name": "Malloc disk", 00:22:00.784 "block_size": 512, 00:22:00.784 "num_blocks": 65536, 00:22:00.784 "uuid": "466ed25e-ddce-46b3-aa6c-8c7c8c255b48", 00:22:00.784 "assigned_rate_limits": { 00:22:00.784 "rw_ios_per_sec": 0, 00:22:00.784 "rw_mbytes_per_sec": 0, 00:22:00.784 "r_mbytes_per_sec": 0, 00:22:00.784 "w_mbytes_per_sec": 0 00:22:00.784 }, 00:22:00.784 "claimed": false, 00:22:00.784 "zoned": false, 00:22:00.784 "supported_io_types": { 00:22:00.784 "read": true, 00:22:00.784 "write": true, 00:22:00.784 "unmap": true, 00:22:00.784 "flush": true, 00:22:00.784 "reset": true, 00:22:00.784 "nvme_admin": false, 00:22:00.784 "nvme_io": false, 00:22:00.784 "nvme_io_md": false, 00:22:00.784 "write_zeroes": true, 00:22:00.784 "zcopy": true, 00:22:00.784 "get_zone_info": false, 00:22:00.784 "zone_management": false, 00:22:00.784 "zone_append": false, 00:22:00.784 "compare": false, 00:22:00.784 "compare_and_write": false, 00:22:00.784 "abort": true, 00:22:00.784 "seek_hole": false, 00:22:00.784 "seek_data": false, 
00:22:00.784 "copy": true, 00:22:00.784 "nvme_iov_md": false 00:22:00.784 }, 00:22:00.784 "memory_domains": [ 00:22:00.784 { 00:22:00.784 "dma_device_id": "system", 00:22:00.784 "dma_device_type": 1 00:22:00.784 }, 00:22:00.784 { 00:22:00.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.784 "dma_device_type": 2 00:22:00.784 } 00:22:00.784 ], 00:22:00.784 "driver_specific": {} 00:22:00.784 } 00:22:00.784 ] 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.784 BaseBdev4 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:00.784 
12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.784 [ 00:22:00.784 { 00:22:00.784 "name": "BaseBdev4", 00:22:00.784 "aliases": [ 00:22:00.784 "2df4e6e0-de0b-4f07-bd0d-894a0a75525a" 00:22:00.784 ], 00:22:00.784 "product_name": "Malloc disk", 00:22:00.784 "block_size": 512, 00:22:00.784 "num_blocks": 65536, 00:22:00.784 "uuid": "2df4e6e0-de0b-4f07-bd0d-894a0a75525a", 00:22:00.784 "assigned_rate_limits": { 00:22:00.784 "rw_ios_per_sec": 0, 00:22:00.784 "rw_mbytes_per_sec": 0, 00:22:00.784 "r_mbytes_per_sec": 0, 00:22:00.784 "w_mbytes_per_sec": 0 00:22:00.784 }, 00:22:00.784 "claimed": false, 00:22:00.784 "zoned": false, 00:22:00.784 "supported_io_types": { 00:22:00.784 "read": true, 00:22:00.784 "write": true, 00:22:00.784 "unmap": true, 00:22:00.784 "flush": true, 00:22:00.784 "reset": true, 00:22:00.784 "nvme_admin": false, 00:22:00.784 "nvme_io": false, 00:22:00.784 "nvme_io_md": false, 00:22:00.784 "write_zeroes": true, 00:22:00.784 "zcopy": true, 00:22:00.784 "get_zone_info": false, 00:22:00.784 "zone_management": false, 00:22:00.784 "zone_append": false, 00:22:00.784 "compare": false, 00:22:00.784 "compare_and_write": false, 00:22:00.784 "abort": true, 00:22:00.784 "seek_hole": false, 00:22:00.784 "seek_data": false, 00:22:00.784 
"copy": true, 00:22:00.784 "nvme_iov_md": false 00:22:00.784 }, 00:22:00.784 "memory_domains": [ 00:22:00.784 { 00:22:00.784 "dma_device_id": "system", 00:22:00.784 "dma_device_type": 1 00:22:00.784 }, 00:22:00.784 { 00:22:00.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.784 "dma_device_type": 2 00:22:00.784 } 00:22:00.784 ], 00:22:00.784 "driver_specific": {} 00:22:00.784 } 00:22:00.784 ] 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.784 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.784 [2024-12-05 12:53:43.340576] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:00.785 [2024-12-05 12:53:43.340696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:00.785 [2024-12-05 12:53:43.340759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:00.785 [2024-12-05 12:53:43.342233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:00.785 [2024-12-05 12:53:43.342341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:00.785 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.785 12:53:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:00.785 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:00.785 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:00.785 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:00.785 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:00.785 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:00.785 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:00.785 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:00.785 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:00.785 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:00.785 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.785 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.785 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.785 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.785 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.043 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.043 "name": "Existed_Raid", 00:22:01.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.043 "strip_size_kb": 64, 00:22:01.043 "state": "configuring", 00:22:01.043 
"raid_level": "concat", 00:22:01.043 "superblock": false, 00:22:01.043 "num_base_bdevs": 4, 00:22:01.043 "num_base_bdevs_discovered": 3, 00:22:01.043 "num_base_bdevs_operational": 4, 00:22:01.043 "base_bdevs_list": [ 00:22:01.043 { 00:22:01.043 "name": "BaseBdev1", 00:22:01.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.043 "is_configured": false, 00:22:01.043 "data_offset": 0, 00:22:01.043 "data_size": 0 00:22:01.043 }, 00:22:01.043 { 00:22:01.043 "name": "BaseBdev2", 00:22:01.043 "uuid": "6a548df3-6350-4179-888c-8c99684b38ee", 00:22:01.043 "is_configured": true, 00:22:01.043 "data_offset": 0, 00:22:01.043 "data_size": 65536 00:22:01.043 }, 00:22:01.043 { 00:22:01.043 "name": "BaseBdev3", 00:22:01.043 "uuid": "466ed25e-ddce-46b3-aa6c-8c7c8c255b48", 00:22:01.043 "is_configured": true, 00:22:01.043 "data_offset": 0, 00:22:01.043 "data_size": 65536 00:22:01.043 }, 00:22:01.043 { 00:22:01.043 "name": "BaseBdev4", 00:22:01.043 "uuid": "2df4e6e0-de0b-4f07-bd0d-894a0a75525a", 00:22:01.043 "is_configured": true, 00:22:01.043 "data_offset": 0, 00:22:01.043 "data_size": 65536 00:22:01.043 } 00:22:01.043 ] 00:22:01.043 }' 00:22:01.043 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.044 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.303 [2024-12-05 12:53:43.664639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.303 "name": "Existed_Raid", 00:22:01.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.303 "strip_size_kb": 64, 00:22:01.303 "state": "configuring", 00:22:01.303 "raid_level": "concat", 00:22:01.303 "superblock": false, 
00:22:01.303 "num_base_bdevs": 4, 00:22:01.303 "num_base_bdevs_discovered": 2, 00:22:01.303 "num_base_bdevs_operational": 4, 00:22:01.303 "base_bdevs_list": [ 00:22:01.303 { 00:22:01.303 "name": "BaseBdev1", 00:22:01.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.303 "is_configured": false, 00:22:01.303 "data_offset": 0, 00:22:01.303 "data_size": 0 00:22:01.303 }, 00:22:01.303 { 00:22:01.303 "name": null, 00:22:01.303 "uuid": "6a548df3-6350-4179-888c-8c99684b38ee", 00:22:01.303 "is_configured": false, 00:22:01.303 "data_offset": 0, 00:22:01.303 "data_size": 65536 00:22:01.303 }, 00:22:01.303 { 00:22:01.303 "name": "BaseBdev3", 00:22:01.303 "uuid": "466ed25e-ddce-46b3-aa6c-8c7c8c255b48", 00:22:01.303 "is_configured": true, 00:22:01.303 "data_offset": 0, 00:22:01.303 "data_size": 65536 00:22:01.303 }, 00:22:01.303 { 00:22:01.303 "name": "BaseBdev4", 00:22:01.303 "uuid": "2df4e6e0-de0b-4f07-bd0d-894a0a75525a", 00:22:01.303 "is_configured": true, 00:22:01.303 "data_offset": 0, 00:22:01.303 "data_size": 65536 00:22:01.303 } 00:22:01.303 ] 00:22:01.303 }' 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.303 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.564 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.564 12:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.564 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.564 12:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:01.564 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.564 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:01.564 12:53:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:01.564 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.564 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.564 [2024-12-05 12:53:44.055195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:01.564 BaseBdev1 00:22:01.564 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.564 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:01.564 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:01.564 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:01.564 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:01.564 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:01.564 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:01.564 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:01.564 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.564 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.564 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.564 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:01.564 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.564 12:53:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:01.564 [ 00:22:01.564 { 00:22:01.564 "name": "BaseBdev1", 00:22:01.564 "aliases": [ 00:22:01.564 "4edab420-5fc1-456f-8d18-3f8ce981da54" 00:22:01.564 ], 00:22:01.564 "product_name": "Malloc disk", 00:22:01.564 "block_size": 512, 00:22:01.564 "num_blocks": 65536, 00:22:01.564 "uuid": "4edab420-5fc1-456f-8d18-3f8ce981da54", 00:22:01.564 "assigned_rate_limits": { 00:22:01.564 "rw_ios_per_sec": 0, 00:22:01.564 "rw_mbytes_per_sec": 0, 00:22:01.564 "r_mbytes_per_sec": 0, 00:22:01.564 "w_mbytes_per_sec": 0 00:22:01.564 }, 00:22:01.564 "claimed": true, 00:22:01.564 "claim_type": "exclusive_write", 00:22:01.564 "zoned": false, 00:22:01.565 "supported_io_types": { 00:22:01.565 "read": true, 00:22:01.565 "write": true, 00:22:01.565 "unmap": true, 00:22:01.565 "flush": true, 00:22:01.565 "reset": true, 00:22:01.565 "nvme_admin": false, 00:22:01.565 "nvme_io": false, 00:22:01.565 "nvme_io_md": false, 00:22:01.565 "write_zeroes": true, 00:22:01.565 "zcopy": true, 00:22:01.565 "get_zone_info": false, 00:22:01.565 "zone_management": false, 00:22:01.565 "zone_append": false, 00:22:01.565 "compare": false, 00:22:01.565 "compare_and_write": false, 00:22:01.565 "abort": true, 00:22:01.565 "seek_hole": false, 00:22:01.565 "seek_data": false, 00:22:01.565 "copy": true, 00:22:01.565 "nvme_iov_md": false 00:22:01.565 }, 00:22:01.565 "memory_domains": [ 00:22:01.565 { 00:22:01.565 "dma_device_id": "system", 00:22:01.565 "dma_device_type": 1 00:22:01.565 }, 00:22:01.565 { 00:22:01.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.565 "dma_device_type": 2 00:22:01.565 } 00:22:01.565 ], 00:22:01.565 "driver_specific": {} 00:22:01.565 } 00:22:01.565 ] 00:22:01.565 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.565 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:01.565 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:01.565 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:01.565 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:01.565 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:01.565 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:01.565 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:01.565 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.565 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.565 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:01.565 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.565 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.565 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.565 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.565 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.565 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.565 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.565 "name": "Existed_Raid", 00:22:01.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.565 "strip_size_kb": 64, 00:22:01.565 "state": "configuring", 00:22:01.565 "raid_level": "concat", 00:22:01.565 "superblock": false, 
00:22:01.565 "num_base_bdevs": 4, 00:22:01.565 "num_base_bdevs_discovered": 3, 00:22:01.565 "num_base_bdevs_operational": 4, 00:22:01.565 "base_bdevs_list": [ 00:22:01.565 { 00:22:01.565 "name": "BaseBdev1", 00:22:01.565 "uuid": "4edab420-5fc1-456f-8d18-3f8ce981da54", 00:22:01.565 "is_configured": true, 00:22:01.565 "data_offset": 0, 00:22:01.565 "data_size": 65536 00:22:01.565 }, 00:22:01.565 { 00:22:01.565 "name": null, 00:22:01.565 "uuid": "6a548df3-6350-4179-888c-8c99684b38ee", 00:22:01.565 "is_configured": false, 00:22:01.565 "data_offset": 0, 00:22:01.565 "data_size": 65536 00:22:01.565 }, 00:22:01.565 { 00:22:01.565 "name": "BaseBdev3", 00:22:01.565 "uuid": "466ed25e-ddce-46b3-aa6c-8c7c8c255b48", 00:22:01.565 "is_configured": true, 00:22:01.565 "data_offset": 0, 00:22:01.565 "data_size": 65536 00:22:01.565 }, 00:22:01.565 { 00:22:01.565 "name": "BaseBdev4", 00:22:01.565 "uuid": "2df4e6e0-de0b-4f07-bd0d-894a0a75525a", 00:22:01.565 "is_configured": true, 00:22:01.565 "data_offset": 0, 00:22:01.565 "data_size": 65536 00:22:01.565 } 00:22:01.565 ] 00:22:01.565 }' 00:22:01.565 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.565 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.824 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.824 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.825 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.825 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:01.825 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.085 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:02.085 12:53:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:02.085 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.085 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.085 [2024-12-05 12:53:44.423349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:02.085 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.085 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:02.085 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:02.085 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:02.085 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:02.085 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:02.085 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:02.085 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:02.085 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:02.085 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:02.085 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:02.085 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.085 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.085 12:53:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.085 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.085 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.085 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:02.085 "name": "Existed_Raid", 00:22:02.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.085 "strip_size_kb": 64, 00:22:02.085 "state": "configuring", 00:22:02.085 "raid_level": "concat", 00:22:02.085 "superblock": false, 00:22:02.085 "num_base_bdevs": 4, 00:22:02.085 "num_base_bdevs_discovered": 2, 00:22:02.085 "num_base_bdevs_operational": 4, 00:22:02.085 "base_bdevs_list": [ 00:22:02.085 { 00:22:02.085 "name": "BaseBdev1", 00:22:02.085 "uuid": "4edab420-5fc1-456f-8d18-3f8ce981da54", 00:22:02.085 "is_configured": true, 00:22:02.085 "data_offset": 0, 00:22:02.085 "data_size": 65536 00:22:02.085 }, 00:22:02.085 { 00:22:02.085 "name": null, 00:22:02.085 "uuid": "6a548df3-6350-4179-888c-8c99684b38ee", 00:22:02.085 "is_configured": false, 00:22:02.085 "data_offset": 0, 00:22:02.085 "data_size": 65536 00:22:02.085 }, 00:22:02.085 { 00:22:02.085 "name": null, 00:22:02.085 "uuid": "466ed25e-ddce-46b3-aa6c-8c7c8c255b48", 00:22:02.085 "is_configured": false, 00:22:02.085 "data_offset": 0, 00:22:02.085 "data_size": 65536 00:22:02.085 }, 00:22:02.085 { 00:22:02.085 "name": "BaseBdev4", 00:22:02.085 "uuid": "2df4e6e0-de0b-4f07-bd0d-894a0a75525a", 00:22:02.085 "is_configured": true, 00:22:02.085 "data_offset": 0, 00:22:02.085 "data_size": 65536 00:22:02.085 } 00:22:02.085 ] 00:22:02.085 }' 00:22:02.085 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:02.085 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.372 [2024-12-05 12:53:44.771391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:02.372 "name": "Existed_Raid", 00:22:02.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.372 "strip_size_kb": 64, 00:22:02.372 "state": "configuring", 00:22:02.372 "raid_level": "concat", 00:22:02.372 "superblock": false, 00:22:02.372 "num_base_bdevs": 4, 00:22:02.372 "num_base_bdevs_discovered": 3, 00:22:02.372 "num_base_bdevs_operational": 4, 00:22:02.372 "base_bdevs_list": [ 00:22:02.372 { 00:22:02.372 "name": "BaseBdev1", 00:22:02.372 "uuid": "4edab420-5fc1-456f-8d18-3f8ce981da54", 00:22:02.372 "is_configured": true, 00:22:02.372 "data_offset": 0, 00:22:02.372 "data_size": 65536 00:22:02.372 }, 00:22:02.372 { 00:22:02.372 "name": null, 00:22:02.372 "uuid": "6a548df3-6350-4179-888c-8c99684b38ee", 00:22:02.372 "is_configured": false, 00:22:02.372 "data_offset": 0, 00:22:02.372 "data_size": 65536 00:22:02.372 }, 00:22:02.372 { 00:22:02.372 "name": "BaseBdev3", 00:22:02.372 "uuid": 
"466ed25e-ddce-46b3-aa6c-8c7c8c255b48", 00:22:02.372 "is_configured": true, 00:22:02.372 "data_offset": 0, 00:22:02.372 "data_size": 65536 00:22:02.372 }, 00:22:02.372 { 00:22:02.372 "name": "BaseBdev4", 00:22:02.372 "uuid": "2df4e6e0-de0b-4f07-bd0d-894a0a75525a", 00:22:02.372 "is_configured": true, 00:22:02.372 "data_offset": 0, 00:22:02.372 "data_size": 65536 00:22:02.372 } 00:22:02.372 ] 00:22:02.372 }' 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:02.372 12:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.635 [2024-12-05 12:53:45.115479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:02.635 "name": "Existed_Raid", 00:22:02.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.635 "strip_size_kb": 64, 00:22:02.635 "state": "configuring", 00:22:02.635 "raid_level": "concat", 00:22:02.635 "superblock": false, 00:22:02.635 "num_base_bdevs": 4, 00:22:02.635 
"num_base_bdevs_discovered": 2, 00:22:02.635 "num_base_bdevs_operational": 4, 00:22:02.635 "base_bdevs_list": [ 00:22:02.635 { 00:22:02.635 "name": null, 00:22:02.635 "uuid": "4edab420-5fc1-456f-8d18-3f8ce981da54", 00:22:02.635 "is_configured": false, 00:22:02.635 "data_offset": 0, 00:22:02.635 "data_size": 65536 00:22:02.635 }, 00:22:02.635 { 00:22:02.635 "name": null, 00:22:02.635 "uuid": "6a548df3-6350-4179-888c-8c99684b38ee", 00:22:02.635 "is_configured": false, 00:22:02.635 "data_offset": 0, 00:22:02.635 "data_size": 65536 00:22:02.635 }, 00:22:02.635 { 00:22:02.635 "name": "BaseBdev3", 00:22:02.635 "uuid": "466ed25e-ddce-46b3-aa6c-8c7c8c255b48", 00:22:02.635 "is_configured": true, 00:22:02.635 "data_offset": 0, 00:22:02.635 "data_size": 65536 00:22:02.635 }, 00:22:02.635 { 00:22:02.635 "name": "BaseBdev4", 00:22:02.635 "uuid": "2df4e6e0-de0b-4f07-bd0d-894a0a75525a", 00:22:02.635 "is_configured": true, 00:22:02.635 "data_offset": 0, 00:22:02.635 "data_size": 65536 00:22:02.635 } 00:22:02.635 ] 00:22:02.635 }' 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:02.635 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.207 [2024-12-05 12:53:45.517942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.207 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.208 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.208 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.208 "name": "Existed_Raid", 00:22:03.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.208 "strip_size_kb": 64, 00:22:03.208 "state": "configuring", 00:22:03.208 "raid_level": "concat", 00:22:03.208 "superblock": false, 00:22:03.208 "num_base_bdevs": 4, 00:22:03.208 "num_base_bdevs_discovered": 3, 00:22:03.208 "num_base_bdevs_operational": 4, 00:22:03.208 "base_bdevs_list": [ 00:22:03.208 { 00:22:03.208 "name": null, 00:22:03.208 "uuid": "4edab420-5fc1-456f-8d18-3f8ce981da54", 00:22:03.208 "is_configured": false, 00:22:03.208 "data_offset": 0, 00:22:03.208 "data_size": 65536 00:22:03.208 }, 00:22:03.208 { 00:22:03.208 "name": "BaseBdev2", 00:22:03.208 "uuid": "6a548df3-6350-4179-888c-8c99684b38ee", 00:22:03.208 "is_configured": true, 00:22:03.208 "data_offset": 0, 00:22:03.208 "data_size": 65536 00:22:03.208 }, 00:22:03.208 { 00:22:03.208 "name": "BaseBdev3", 00:22:03.208 "uuid": "466ed25e-ddce-46b3-aa6c-8c7c8c255b48", 00:22:03.208 "is_configured": true, 00:22:03.208 "data_offset": 0, 00:22:03.208 "data_size": 65536 00:22:03.208 }, 00:22:03.208 { 00:22:03.208 "name": "BaseBdev4", 00:22:03.208 "uuid": "2df4e6e0-de0b-4f07-bd0d-894a0a75525a", 00:22:03.208 "is_configured": true, 00:22:03.208 "data_offset": 0, 00:22:03.208 "data_size": 65536 00:22:03.208 } 00:22:03.208 ] 00:22:03.208 }' 00:22:03.208 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.208 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.467 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:22:03.467 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.467 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4edab420-5fc1-456f-8d18-3f8ce981da54 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.468 [2024-12-05 12:53:45.944538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:03.468 [2024-12-05 12:53:45.944571] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:03.468 [2024-12-05 12:53:45.944576] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:22:03.468 [2024-12-05 12:53:45.944781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:22:03.468 [2024-12-05 12:53:45.944880] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:03.468 [2024-12-05 12:53:45.944888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:03.468 [2024-12-05 12:53:45.945053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.468 NewBaseBdev 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:03.468 [ 00:22:03.468 { 00:22:03.468 "name": "NewBaseBdev", 00:22:03.468 "aliases": [ 00:22:03.468 "4edab420-5fc1-456f-8d18-3f8ce981da54" 00:22:03.468 ], 00:22:03.468 "product_name": "Malloc disk", 00:22:03.468 "block_size": 512, 00:22:03.468 "num_blocks": 65536, 00:22:03.468 "uuid": "4edab420-5fc1-456f-8d18-3f8ce981da54", 00:22:03.468 "assigned_rate_limits": { 00:22:03.468 "rw_ios_per_sec": 0, 00:22:03.468 "rw_mbytes_per_sec": 0, 00:22:03.468 "r_mbytes_per_sec": 0, 00:22:03.468 "w_mbytes_per_sec": 0 00:22:03.468 }, 00:22:03.468 "claimed": true, 00:22:03.468 "claim_type": "exclusive_write", 00:22:03.468 "zoned": false, 00:22:03.468 "supported_io_types": { 00:22:03.468 "read": true, 00:22:03.468 "write": true, 00:22:03.468 "unmap": true, 00:22:03.468 "flush": true, 00:22:03.468 "reset": true, 00:22:03.468 "nvme_admin": false, 00:22:03.468 "nvme_io": false, 00:22:03.468 "nvme_io_md": false, 00:22:03.468 "write_zeroes": true, 00:22:03.468 "zcopy": true, 00:22:03.468 "get_zone_info": false, 00:22:03.468 "zone_management": false, 00:22:03.468 "zone_append": false, 00:22:03.468 "compare": false, 00:22:03.468 "compare_and_write": false, 00:22:03.468 "abort": true, 00:22:03.468 "seek_hole": false, 00:22:03.468 "seek_data": false, 00:22:03.468 "copy": true, 00:22:03.468 "nvme_iov_md": false 00:22:03.468 }, 00:22:03.468 "memory_domains": [ 00:22:03.468 { 00:22:03.468 "dma_device_id": "system", 00:22:03.468 "dma_device_type": 1 00:22:03.468 }, 00:22:03.468 { 00:22:03.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.468 "dma_device_type": 2 00:22:03.468 } 00:22:03.468 ], 00:22:03.468 "driver_specific": {} 00:22:03.468 } 00:22:03.468 ] 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.468 "name": "Existed_Raid", 00:22:03.468 "uuid": "05e6676e-37da-4da6-9711-855857135a69", 00:22:03.468 "strip_size_kb": 64, 00:22:03.468 "state": "online", 00:22:03.468 "raid_level": "concat", 00:22:03.468 "superblock": false, 00:22:03.468 
"num_base_bdevs": 4, 00:22:03.468 "num_base_bdevs_discovered": 4, 00:22:03.468 "num_base_bdevs_operational": 4, 00:22:03.468 "base_bdevs_list": [ 00:22:03.468 { 00:22:03.468 "name": "NewBaseBdev", 00:22:03.468 "uuid": "4edab420-5fc1-456f-8d18-3f8ce981da54", 00:22:03.468 "is_configured": true, 00:22:03.468 "data_offset": 0, 00:22:03.468 "data_size": 65536 00:22:03.468 }, 00:22:03.468 { 00:22:03.468 "name": "BaseBdev2", 00:22:03.468 "uuid": "6a548df3-6350-4179-888c-8c99684b38ee", 00:22:03.468 "is_configured": true, 00:22:03.468 "data_offset": 0, 00:22:03.468 "data_size": 65536 00:22:03.468 }, 00:22:03.468 { 00:22:03.468 "name": "BaseBdev3", 00:22:03.468 "uuid": "466ed25e-ddce-46b3-aa6c-8c7c8c255b48", 00:22:03.468 "is_configured": true, 00:22:03.468 "data_offset": 0, 00:22:03.468 "data_size": 65536 00:22:03.468 }, 00:22:03.468 { 00:22:03.468 "name": "BaseBdev4", 00:22:03.468 "uuid": "2df4e6e0-de0b-4f07-bd0d-894a0a75525a", 00:22:03.468 "is_configured": true, 00:22:03.468 "data_offset": 0, 00:22:03.468 "data_size": 65536 00:22:03.468 } 00:22:03.468 ] 00:22:03.468 }' 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.468 12:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.728 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:03.728 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:03.728 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:03.728 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:03.728 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:03.728 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:03.728 12:53:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:03.728 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:03.728 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.728 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.728 [2024-12-05 12:53:46.292928] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:03.728 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:03.989 "name": "Existed_Raid", 00:22:03.989 "aliases": [ 00:22:03.989 "05e6676e-37da-4da6-9711-855857135a69" 00:22:03.989 ], 00:22:03.989 "product_name": "Raid Volume", 00:22:03.989 "block_size": 512, 00:22:03.989 "num_blocks": 262144, 00:22:03.989 "uuid": "05e6676e-37da-4da6-9711-855857135a69", 00:22:03.989 "assigned_rate_limits": { 00:22:03.989 "rw_ios_per_sec": 0, 00:22:03.989 "rw_mbytes_per_sec": 0, 00:22:03.989 "r_mbytes_per_sec": 0, 00:22:03.989 "w_mbytes_per_sec": 0 00:22:03.989 }, 00:22:03.989 "claimed": false, 00:22:03.989 "zoned": false, 00:22:03.989 "supported_io_types": { 00:22:03.989 "read": true, 00:22:03.989 "write": true, 00:22:03.989 "unmap": true, 00:22:03.989 "flush": true, 00:22:03.989 "reset": true, 00:22:03.989 "nvme_admin": false, 00:22:03.989 "nvme_io": false, 00:22:03.989 "nvme_io_md": false, 00:22:03.989 "write_zeroes": true, 00:22:03.989 "zcopy": false, 00:22:03.989 "get_zone_info": false, 00:22:03.989 "zone_management": false, 00:22:03.989 "zone_append": false, 00:22:03.989 "compare": false, 00:22:03.989 "compare_and_write": false, 00:22:03.989 "abort": false, 00:22:03.989 "seek_hole": false, 00:22:03.989 "seek_data": false, 00:22:03.989 "copy": false, 00:22:03.989 "nvme_iov_md": false 00:22:03.989 }, 
00:22:03.989 "memory_domains": [ 00:22:03.989 { 00:22:03.989 "dma_device_id": "system", 00:22:03.989 "dma_device_type": 1 00:22:03.989 }, 00:22:03.989 { 00:22:03.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.989 "dma_device_type": 2 00:22:03.989 }, 00:22:03.989 { 00:22:03.989 "dma_device_id": "system", 00:22:03.989 "dma_device_type": 1 00:22:03.989 }, 00:22:03.989 { 00:22:03.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.989 "dma_device_type": 2 00:22:03.989 }, 00:22:03.989 { 00:22:03.989 "dma_device_id": "system", 00:22:03.989 "dma_device_type": 1 00:22:03.989 }, 00:22:03.989 { 00:22:03.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.989 "dma_device_type": 2 00:22:03.989 }, 00:22:03.989 { 00:22:03.989 "dma_device_id": "system", 00:22:03.989 "dma_device_type": 1 00:22:03.989 }, 00:22:03.989 { 00:22:03.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.989 "dma_device_type": 2 00:22:03.989 } 00:22:03.989 ], 00:22:03.989 "driver_specific": { 00:22:03.989 "raid": { 00:22:03.989 "uuid": "05e6676e-37da-4da6-9711-855857135a69", 00:22:03.989 "strip_size_kb": 64, 00:22:03.989 "state": "online", 00:22:03.989 "raid_level": "concat", 00:22:03.989 "superblock": false, 00:22:03.989 "num_base_bdevs": 4, 00:22:03.989 "num_base_bdevs_discovered": 4, 00:22:03.989 "num_base_bdevs_operational": 4, 00:22:03.989 "base_bdevs_list": [ 00:22:03.989 { 00:22:03.989 "name": "NewBaseBdev", 00:22:03.989 "uuid": "4edab420-5fc1-456f-8d18-3f8ce981da54", 00:22:03.989 "is_configured": true, 00:22:03.989 "data_offset": 0, 00:22:03.989 "data_size": 65536 00:22:03.989 }, 00:22:03.989 { 00:22:03.989 "name": "BaseBdev2", 00:22:03.989 "uuid": "6a548df3-6350-4179-888c-8c99684b38ee", 00:22:03.989 "is_configured": true, 00:22:03.989 "data_offset": 0, 00:22:03.989 "data_size": 65536 00:22:03.989 }, 00:22:03.989 { 00:22:03.989 "name": "BaseBdev3", 00:22:03.989 "uuid": "466ed25e-ddce-46b3-aa6c-8c7c8c255b48", 00:22:03.989 "is_configured": true, 00:22:03.989 "data_offset": 0, 
00:22:03.989 "data_size": 65536 00:22:03.989 }, 00:22:03.989 { 00:22:03.989 "name": "BaseBdev4", 00:22:03.989 "uuid": "2df4e6e0-de0b-4f07-bd0d-894a0a75525a", 00:22:03.989 "is_configured": true, 00:22:03.989 "data_offset": 0, 00:22:03.989 "data_size": 65536 00:22:03.989 } 00:22:03.989 ] 00:22:03.989 } 00:22:03.989 } 00:22:03.989 }' 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:03.989 BaseBdev2 00:22:03.989 BaseBdev3 00:22:03.989 BaseBdev4' 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:03.989 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.990 [2024-12-05 12:53:46.520660] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:03.990 [2024-12-05 12:53:46.520683] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:03.990 [2024-12-05 12:53:46.520738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:03.990 [2024-12-05 12:53:46.520794] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:03.990 [2024-12-05 12:53:46.520802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69299 00:22:03.990 12:53:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 69299 ']' 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69299 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69299 00:22:03.990 killing process with pid 69299 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69299' 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69299 00:22:03.990 [2024-12-05 12:53:46.554234] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:03.990 12:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69299 00:22:04.248 [2024-12-05 12:53:46.749074] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:04.843 12:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:22:04.843 00:22:04.843 real 0m8.105s 00:22:04.843 user 0m13.018s 00:22:04.843 sys 0m1.368s 00:22:04.843 ************************************ 00:22:04.843 END TEST raid_state_function_test 00:22:04.843 ************************************ 00:22:04.843 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.843 12:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.843 12:53:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test concat 4 true 00:22:04.843 12:53:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:04.844 12:53:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:04.844 12:53:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:04.844 ************************************ 00:22:04.844 START TEST raid_state_function_test_sb 00:22:04.844 ************************************ 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- 
# echo BaseBdev3 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:04.844 Process raid pid: 69932 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69932 00:22:04.844 
12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69932' 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69932 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69932 ']' 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.844 12:53:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:05.105 [2024-12-05 12:53:47.450721] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:22:05.105 [2024-12-05 12:53:47.450836] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.105 [2024-12-05 12:53:47.606164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.366 [2024-12-05 12:53:47.691904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.366 [2024-12-05 12:53:47.803826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:05.366 [2024-12-05 12:53:47.803856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.936 [2024-12-05 12:53:48.300347] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:05.936 [2024-12-05 12:53:48.300395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:05.936 [2024-12-05 12:53:48.300403] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:05.936 [2024-12-05 12:53:48.300411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:05.936 [2024-12-05 12:53:48.300416] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:22:05.936 [2024-12-05 12:53:48.300423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:05.936 [2024-12-05 12:53:48.300428] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:05.936 [2024-12-05 12:53:48.300434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.936 12:53:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:05.936 "name": "Existed_Raid", 00:22:05.936 "uuid": "79620e7d-35cd-4b51-b021-d83ce552c0f6", 00:22:05.936 "strip_size_kb": 64, 00:22:05.936 "state": "configuring", 00:22:05.936 "raid_level": "concat", 00:22:05.936 "superblock": true, 00:22:05.936 "num_base_bdevs": 4, 00:22:05.936 "num_base_bdevs_discovered": 0, 00:22:05.936 "num_base_bdevs_operational": 4, 00:22:05.936 "base_bdevs_list": [ 00:22:05.936 { 00:22:05.936 "name": "BaseBdev1", 00:22:05.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.936 "is_configured": false, 00:22:05.936 "data_offset": 0, 00:22:05.936 "data_size": 0 00:22:05.936 }, 00:22:05.936 { 00:22:05.936 "name": "BaseBdev2", 00:22:05.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.936 "is_configured": false, 00:22:05.936 "data_offset": 0, 00:22:05.936 "data_size": 0 00:22:05.936 }, 00:22:05.936 { 00:22:05.936 "name": "BaseBdev3", 00:22:05.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.936 "is_configured": false, 00:22:05.936 "data_offset": 0, 00:22:05.936 "data_size": 0 00:22:05.936 }, 00:22:05.936 { 00:22:05.936 "name": "BaseBdev4", 00:22:05.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.936 "is_configured": false, 00:22:05.936 "data_offset": 0, 00:22:05.936 "data_size": 0 00:22:05.936 } 00:22:05.936 ] 00:22:05.936 }' 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:05.936 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.198 12:53:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.198 [2024-12-05 12:53:48.616353] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:06.198 [2024-12-05 12:53:48.616385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.198 [2024-12-05 12:53:48.624369] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:06.198 [2024-12-05 12:53:48.624403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:06.198 [2024-12-05 12:53:48.624410] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:06.198 [2024-12-05 12:53:48.624418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:06.198 [2024-12-05 12:53:48.624423] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:06.198 [2024-12-05 12:53:48.624430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:06.198 [2024-12-05 12:53:48.624434] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:22:06.198 [2024-12-05 12:53:48.624441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.198 [2024-12-05 12:53:48.652521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:06.198 BaseBdev1 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.198 [ 00:22:06.198 { 00:22:06.198 "name": "BaseBdev1", 00:22:06.198 "aliases": [ 00:22:06.198 "a1723749-cce0-4b39-9d2d-b4d996923bce" 00:22:06.198 ], 00:22:06.198 "product_name": "Malloc disk", 00:22:06.198 "block_size": 512, 00:22:06.198 "num_blocks": 65536, 00:22:06.198 "uuid": "a1723749-cce0-4b39-9d2d-b4d996923bce", 00:22:06.198 "assigned_rate_limits": { 00:22:06.198 "rw_ios_per_sec": 0, 00:22:06.198 "rw_mbytes_per_sec": 0, 00:22:06.198 "r_mbytes_per_sec": 0, 00:22:06.198 "w_mbytes_per_sec": 0 00:22:06.198 }, 00:22:06.198 "claimed": true, 00:22:06.198 "claim_type": "exclusive_write", 00:22:06.198 "zoned": false, 00:22:06.198 "supported_io_types": { 00:22:06.198 "read": true, 00:22:06.198 "write": true, 00:22:06.198 "unmap": true, 00:22:06.198 "flush": true, 00:22:06.198 "reset": true, 00:22:06.198 "nvme_admin": false, 00:22:06.198 "nvme_io": false, 00:22:06.198 "nvme_io_md": false, 00:22:06.198 "write_zeroes": true, 00:22:06.198 "zcopy": true, 00:22:06.198 "get_zone_info": false, 00:22:06.198 "zone_management": false, 00:22:06.198 "zone_append": false, 00:22:06.198 "compare": false, 00:22:06.198 "compare_and_write": false, 00:22:06.198 "abort": true, 00:22:06.198 "seek_hole": false, 00:22:06.198 "seek_data": false, 00:22:06.198 "copy": true, 00:22:06.198 "nvme_iov_md": false 00:22:06.198 }, 00:22:06.198 "memory_domains": [ 00:22:06.198 { 00:22:06.198 "dma_device_id": "system", 00:22:06.198 "dma_device_type": 1 00:22:06.198 }, 00:22:06.198 { 00:22:06.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.198 "dma_device_type": 2 00:22:06.198 } 
00:22:06.198 ], 00:22:06.198 "driver_specific": {} 00:22:06.198 } 00:22:06.198 ] 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.198 12:53:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:06.198 "name": "Existed_Raid", 00:22:06.198 "uuid": "462f061e-7b78-4661-9b05-ce22391c1300", 00:22:06.198 "strip_size_kb": 64, 00:22:06.198 "state": "configuring", 00:22:06.198 "raid_level": "concat", 00:22:06.198 "superblock": true, 00:22:06.198 "num_base_bdevs": 4, 00:22:06.198 "num_base_bdevs_discovered": 1, 00:22:06.198 "num_base_bdevs_operational": 4, 00:22:06.198 "base_bdevs_list": [ 00:22:06.198 { 00:22:06.198 "name": "BaseBdev1", 00:22:06.198 "uuid": "a1723749-cce0-4b39-9d2d-b4d996923bce", 00:22:06.198 "is_configured": true, 00:22:06.198 "data_offset": 2048, 00:22:06.198 "data_size": 63488 00:22:06.198 }, 00:22:06.198 { 00:22:06.198 "name": "BaseBdev2", 00:22:06.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.198 "is_configured": false, 00:22:06.198 "data_offset": 0, 00:22:06.198 "data_size": 0 00:22:06.198 }, 00:22:06.198 { 00:22:06.198 "name": "BaseBdev3", 00:22:06.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.198 "is_configured": false, 00:22:06.198 "data_offset": 0, 00:22:06.198 "data_size": 0 00:22:06.198 }, 00:22:06.198 { 00:22:06.198 "name": "BaseBdev4", 00:22:06.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.198 "is_configured": false, 00:22:06.198 "data_offset": 0, 00:22:06.198 "data_size": 0 00:22:06.198 } 00:22:06.198 ] 00:22:06.198 }' 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:06.198 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.460 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:06.460 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.460 12:53:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.460 [2024-12-05 12:53:48.988615] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:06.460 [2024-12-05 12:53:48.988746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:06.460 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.460 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:06.460 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.460 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.460 [2024-12-05 12:53:48.996668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:06.460 [2024-12-05 12:53:48.998307] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:06.460 [2024-12-05 12:53:48.998405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:06.460 [2024-12-05 12:53:48.998451] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:06.460 [2024-12-05 12:53:48.998475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:06.460 [2024-12-05 12:53:48.998604] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:06.460 [2024-12-05 12:53:48.998627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:06.460 12:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.460 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:22:06.460 12:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:06.460 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:06.460 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:06.460 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:06.460 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:06.460 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:06.460 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:06.460 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:06.460 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:06.460 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:06.460 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:06.460 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.460 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.460 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:06.460 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.460 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.460 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:22:06.460 "name": "Existed_Raid", 00:22:06.460 "uuid": "09e41458-b3f3-4d4e-8b2a-2adb73a308f7", 00:22:06.460 "strip_size_kb": 64, 00:22:06.460 "state": "configuring", 00:22:06.460 "raid_level": "concat", 00:22:06.460 "superblock": true, 00:22:06.460 "num_base_bdevs": 4, 00:22:06.460 "num_base_bdevs_discovered": 1, 00:22:06.460 "num_base_bdevs_operational": 4, 00:22:06.460 "base_bdevs_list": [ 00:22:06.460 { 00:22:06.460 "name": "BaseBdev1", 00:22:06.460 "uuid": "a1723749-cce0-4b39-9d2d-b4d996923bce", 00:22:06.460 "is_configured": true, 00:22:06.460 "data_offset": 2048, 00:22:06.460 "data_size": 63488 00:22:06.460 }, 00:22:06.460 { 00:22:06.460 "name": "BaseBdev2", 00:22:06.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.460 "is_configured": false, 00:22:06.460 "data_offset": 0, 00:22:06.460 "data_size": 0 00:22:06.460 }, 00:22:06.460 { 00:22:06.460 "name": "BaseBdev3", 00:22:06.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.460 "is_configured": false, 00:22:06.460 "data_offset": 0, 00:22:06.460 "data_size": 0 00:22:06.460 }, 00:22:06.460 { 00:22:06.460 "name": "BaseBdev4", 00:22:06.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.460 "is_configured": false, 00:22:06.460 "data_offset": 0, 00:22:06.460 "data_size": 0 00:22:06.460 } 00:22:06.460 ] 00:22:06.460 }' 00:22:06.460 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:06.460 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.740 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:06.740 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.740 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.002 [2024-12-05 12:53:49.335303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:22:07.002 BaseBdev2 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.002 [ 00:22:07.002 { 00:22:07.002 "name": "BaseBdev2", 00:22:07.002 "aliases": [ 00:22:07.002 "1a9c60ff-f95e-43fd-b4bb-8c353b791006" 00:22:07.002 ], 00:22:07.002 "product_name": "Malloc disk", 00:22:07.002 "block_size": 512, 00:22:07.002 "num_blocks": 65536, 00:22:07.002 "uuid": "1a9c60ff-f95e-43fd-b4bb-8c353b791006", 
00:22:07.002 "assigned_rate_limits": { 00:22:07.002 "rw_ios_per_sec": 0, 00:22:07.002 "rw_mbytes_per_sec": 0, 00:22:07.002 "r_mbytes_per_sec": 0, 00:22:07.002 "w_mbytes_per_sec": 0 00:22:07.002 }, 00:22:07.002 "claimed": true, 00:22:07.002 "claim_type": "exclusive_write", 00:22:07.002 "zoned": false, 00:22:07.002 "supported_io_types": { 00:22:07.002 "read": true, 00:22:07.002 "write": true, 00:22:07.002 "unmap": true, 00:22:07.002 "flush": true, 00:22:07.002 "reset": true, 00:22:07.002 "nvme_admin": false, 00:22:07.002 "nvme_io": false, 00:22:07.002 "nvme_io_md": false, 00:22:07.002 "write_zeroes": true, 00:22:07.002 "zcopy": true, 00:22:07.002 "get_zone_info": false, 00:22:07.002 "zone_management": false, 00:22:07.002 "zone_append": false, 00:22:07.002 "compare": false, 00:22:07.002 "compare_and_write": false, 00:22:07.002 "abort": true, 00:22:07.002 "seek_hole": false, 00:22:07.002 "seek_data": false, 00:22:07.002 "copy": true, 00:22:07.002 "nvme_iov_md": false 00:22:07.002 }, 00:22:07.002 "memory_domains": [ 00:22:07.002 { 00:22:07.002 "dma_device_id": "system", 00:22:07.002 "dma_device_type": 1 00:22:07.002 }, 00:22:07.002 { 00:22:07.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.002 "dma_device_type": 2 00:22:07.002 } 00:22:07.002 ], 00:22:07.002 "driver_specific": {} 00:22:07.002 } 00:22:07.002 ] 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:07.002 "name": "Existed_Raid", 00:22:07.002 "uuid": "09e41458-b3f3-4d4e-8b2a-2adb73a308f7", 00:22:07.002 "strip_size_kb": 64, 00:22:07.002 "state": "configuring", 00:22:07.002 "raid_level": "concat", 00:22:07.002 "superblock": true, 00:22:07.002 "num_base_bdevs": 4, 00:22:07.002 "num_base_bdevs_discovered": 2, 00:22:07.002 
"num_base_bdevs_operational": 4, 00:22:07.002 "base_bdevs_list": [ 00:22:07.002 { 00:22:07.002 "name": "BaseBdev1", 00:22:07.002 "uuid": "a1723749-cce0-4b39-9d2d-b4d996923bce", 00:22:07.002 "is_configured": true, 00:22:07.002 "data_offset": 2048, 00:22:07.002 "data_size": 63488 00:22:07.002 }, 00:22:07.002 { 00:22:07.002 "name": "BaseBdev2", 00:22:07.002 "uuid": "1a9c60ff-f95e-43fd-b4bb-8c353b791006", 00:22:07.002 "is_configured": true, 00:22:07.002 "data_offset": 2048, 00:22:07.002 "data_size": 63488 00:22:07.002 }, 00:22:07.002 { 00:22:07.002 "name": "BaseBdev3", 00:22:07.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.002 "is_configured": false, 00:22:07.002 "data_offset": 0, 00:22:07.002 "data_size": 0 00:22:07.002 }, 00:22:07.002 { 00:22:07.002 "name": "BaseBdev4", 00:22:07.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.002 "is_configured": false, 00:22:07.002 "data_offset": 0, 00:22:07.002 "data_size": 0 00:22:07.002 } 00:22:07.002 ] 00:22:07.002 }' 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:07.002 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.263 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:07.263 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.263 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.263 [2024-12-05 12:53:49.731857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:07.263 BaseBdev3 00:22:07.263 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.263 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:07.263 12:53:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:07.263 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:07.263 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:07.263 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:07.263 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:07.263 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:07.263 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.263 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.263 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.263 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:07.263 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.263 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.263 [ 00:22:07.263 { 00:22:07.263 "name": "BaseBdev3", 00:22:07.263 "aliases": [ 00:22:07.263 "d338cc6b-ad78-4df6-9dfb-80f726234422" 00:22:07.263 ], 00:22:07.263 "product_name": "Malloc disk", 00:22:07.263 "block_size": 512, 00:22:07.263 "num_blocks": 65536, 00:22:07.263 "uuid": "d338cc6b-ad78-4df6-9dfb-80f726234422", 00:22:07.263 "assigned_rate_limits": { 00:22:07.263 "rw_ios_per_sec": 0, 00:22:07.263 "rw_mbytes_per_sec": 0, 00:22:07.263 "r_mbytes_per_sec": 0, 00:22:07.264 "w_mbytes_per_sec": 0 00:22:07.264 }, 00:22:07.264 "claimed": true, 00:22:07.264 "claim_type": "exclusive_write", 00:22:07.264 "zoned": false, 00:22:07.264 "supported_io_types": { 
00:22:07.264 "read": true, 00:22:07.264 "write": true, 00:22:07.264 "unmap": true, 00:22:07.264 "flush": true, 00:22:07.264 "reset": true, 00:22:07.264 "nvme_admin": false, 00:22:07.264 "nvme_io": false, 00:22:07.264 "nvme_io_md": false, 00:22:07.264 "write_zeroes": true, 00:22:07.264 "zcopy": true, 00:22:07.264 "get_zone_info": false, 00:22:07.264 "zone_management": false, 00:22:07.264 "zone_append": false, 00:22:07.264 "compare": false, 00:22:07.264 "compare_and_write": false, 00:22:07.264 "abort": true, 00:22:07.264 "seek_hole": false, 00:22:07.264 "seek_data": false, 00:22:07.264 "copy": true, 00:22:07.264 "nvme_iov_md": false 00:22:07.264 }, 00:22:07.264 "memory_domains": [ 00:22:07.264 { 00:22:07.264 "dma_device_id": "system", 00:22:07.264 "dma_device_type": 1 00:22:07.264 }, 00:22:07.264 { 00:22:07.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.264 "dma_device_type": 2 00:22:07.264 } 00:22:07.264 ], 00:22:07.264 "driver_specific": {} 00:22:07.264 } 00:22:07.264 ] 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:07.264 "name": "Existed_Raid", 00:22:07.264 "uuid": "09e41458-b3f3-4d4e-8b2a-2adb73a308f7", 00:22:07.264 "strip_size_kb": 64, 00:22:07.264 "state": "configuring", 00:22:07.264 "raid_level": "concat", 00:22:07.264 "superblock": true, 00:22:07.264 "num_base_bdevs": 4, 00:22:07.264 "num_base_bdevs_discovered": 3, 00:22:07.264 "num_base_bdevs_operational": 4, 00:22:07.264 "base_bdevs_list": [ 00:22:07.264 { 00:22:07.264 "name": "BaseBdev1", 00:22:07.264 "uuid": "a1723749-cce0-4b39-9d2d-b4d996923bce", 00:22:07.264 "is_configured": true, 00:22:07.264 "data_offset": 2048, 00:22:07.264 "data_size": 63488 00:22:07.264 }, 00:22:07.264 { 00:22:07.264 "name": "BaseBdev2", 00:22:07.264 
"uuid": "1a9c60ff-f95e-43fd-b4bb-8c353b791006", 00:22:07.264 "is_configured": true, 00:22:07.264 "data_offset": 2048, 00:22:07.264 "data_size": 63488 00:22:07.264 }, 00:22:07.264 { 00:22:07.264 "name": "BaseBdev3", 00:22:07.264 "uuid": "d338cc6b-ad78-4df6-9dfb-80f726234422", 00:22:07.264 "is_configured": true, 00:22:07.264 "data_offset": 2048, 00:22:07.264 "data_size": 63488 00:22:07.264 }, 00:22:07.264 { 00:22:07.264 "name": "BaseBdev4", 00:22:07.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.264 "is_configured": false, 00:22:07.264 "data_offset": 0, 00:22:07.264 "data_size": 0 00:22:07.264 } 00:22:07.264 ] 00:22:07.264 }' 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:07.264 12:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.523 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:07.523 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.523 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.523 [2024-12-05 12:53:50.090800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:07.523 [2024-12-05 12:53:50.091012] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:07.523 [2024-12-05 12:53:50.091024] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:07.523 [2024-12-05 12:53:50.091246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:07.523 BaseBdev4 00:22:07.523 [2024-12-05 12:53:50.091358] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:07.523 [2024-12-05 12:53:50.091368] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:22:07.523 [2024-12-05 12:53:50.091472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:07.523 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.523 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:22:07.523 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:22:07.523 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:07.523 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:07.523 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:07.523 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:07.523 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:07.523 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.523 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.523 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.523 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:07.523 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.523 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.785 [ 00:22:07.785 { 00:22:07.785 "name": "BaseBdev4", 00:22:07.785 "aliases": [ 00:22:07.785 "37d0e30a-3f77-4489-bac8-782478547808" 00:22:07.785 ], 00:22:07.785 "product_name": "Malloc disk", 00:22:07.785 "block_size": 512, 00:22:07.785 
"num_blocks": 65536, 00:22:07.785 "uuid": "37d0e30a-3f77-4489-bac8-782478547808", 00:22:07.785 "assigned_rate_limits": { 00:22:07.785 "rw_ios_per_sec": 0, 00:22:07.785 "rw_mbytes_per_sec": 0, 00:22:07.785 "r_mbytes_per_sec": 0, 00:22:07.785 "w_mbytes_per_sec": 0 00:22:07.785 }, 00:22:07.785 "claimed": true, 00:22:07.785 "claim_type": "exclusive_write", 00:22:07.785 "zoned": false, 00:22:07.785 "supported_io_types": { 00:22:07.785 "read": true, 00:22:07.785 "write": true, 00:22:07.785 "unmap": true, 00:22:07.785 "flush": true, 00:22:07.785 "reset": true, 00:22:07.785 "nvme_admin": false, 00:22:07.785 "nvme_io": false, 00:22:07.785 "nvme_io_md": false, 00:22:07.785 "write_zeroes": true, 00:22:07.785 "zcopy": true, 00:22:07.785 "get_zone_info": false, 00:22:07.785 "zone_management": false, 00:22:07.785 "zone_append": false, 00:22:07.785 "compare": false, 00:22:07.785 "compare_and_write": false, 00:22:07.785 "abort": true, 00:22:07.785 "seek_hole": false, 00:22:07.785 "seek_data": false, 00:22:07.785 "copy": true, 00:22:07.785 "nvme_iov_md": false 00:22:07.785 }, 00:22:07.785 "memory_domains": [ 00:22:07.785 { 00:22:07.785 "dma_device_id": "system", 00:22:07.785 "dma_device_type": 1 00:22:07.785 }, 00:22:07.785 { 00:22:07.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.785 "dma_device_type": 2 00:22:07.785 } 00:22:07.785 ], 00:22:07.785 "driver_specific": {} 00:22:07.785 } 00:22:07.785 ] 00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:07.785 "name": "Existed_Raid", 00:22:07.785 "uuid": "09e41458-b3f3-4d4e-8b2a-2adb73a308f7", 00:22:07.785 "strip_size_kb": 64, 00:22:07.785 "state": "online", 00:22:07.785 "raid_level": "concat", 00:22:07.785 "superblock": true, 00:22:07.785 "num_base_bdevs": 4, 
00:22:07.785 "num_base_bdevs_discovered": 4, 00:22:07.785 "num_base_bdevs_operational": 4, 00:22:07.785 "base_bdevs_list": [ 00:22:07.785 { 00:22:07.785 "name": "BaseBdev1", 00:22:07.785 "uuid": "a1723749-cce0-4b39-9d2d-b4d996923bce", 00:22:07.785 "is_configured": true, 00:22:07.785 "data_offset": 2048, 00:22:07.785 "data_size": 63488 00:22:07.785 }, 00:22:07.785 { 00:22:07.785 "name": "BaseBdev2", 00:22:07.785 "uuid": "1a9c60ff-f95e-43fd-b4bb-8c353b791006", 00:22:07.785 "is_configured": true, 00:22:07.785 "data_offset": 2048, 00:22:07.785 "data_size": 63488 00:22:07.785 }, 00:22:07.785 { 00:22:07.785 "name": "BaseBdev3", 00:22:07.785 "uuid": "d338cc6b-ad78-4df6-9dfb-80f726234422", 00:22:07.785 "is_configured": true, 00:22:07.785 "data_offset": 2048, 00:22:07.785 "data_size": 63488 00:22:07.785 }, 00:22:07.785 { 00:22:07.785 "name": "BaseBdev4", 00:22:07.785 "uuid": "37d0e30a-3f77-4489-bac8-782478547808", 00:22:07.785 "is_configured": true, 00:22:07.785 "data_offset": 2048, 00:22:07.785 "data_size": 63488 00:22:07.785 } 00:22:07.785 ] 00:22:07.785 }' 00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:07.785 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:08.047 
12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.047 [2024-12-05 12:53:50.415194] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:08.047 "name": "Existed_Raid", 00:22:08.047 "aliases": [ 00:22:08.047 "09e41458-b3f3-4d4e-8b2a-2adb73a308f7" 00:22:08.047 ], 00:22:08.047 "product_name": "Raid Volume", 00:22:08.047 "block_size": 512, 00:22:08.047 "num_blocks": 253952, 00:22:08.047 "uuid": "09e41458-b3f3-4d4e-8b2a-2adb73a308f7", 00:22:08.047 "assigned_rate_limits": { 00:22:08.047 "rw_ios_per_sec": 0, 00:22:08.047 "rw_mbytes_per_sec": 0, 00:22:08.047 "r_mbytes_per_sec": 0, 00:22:08.047 "w_mbytes_per_sec": 0 00:22:08.047 }, 00:22:08.047 "claimed": false, 00:22:08.047 "zoned": false, 00:22:08.047 "supported_io_types": { 00:22:08.047 "read": true, 00:22:08.047 "write": true, 00:22:08.047 "unmap": true, 00:22:08.047 "flush": true, 00:22:08.047 "reset": true, 00:22:08.047 "nvme_admin": false, 00:22:08.047 "nvme_io": false, 00:22:08.047 "nvme_io_md": false, 00:22:08.047 "write_zeroes": true, 00:22:08.047 "zcopy": false, 00:22:08.047 "get_zone_info": false, 00:22:08.047 "zone_management": false, 00:22:08.047 "zone_append": false, 00:22:08.047 "compare": false, 00:22:08.047 "compare_and_write": false, 00:22:08.047 "abort": false, 00:22:08.047 "seek_hole": false, 00:22:08.047 "seek_data": false, 00:22:08.047 "copy": false, 00:22:08.047 
"nvme_iov_md": false 00:22:08.047 }, 00:22:08.047 "memory_domains": [ 00:22:08.047 { 00:22:08.047 "dma_device_id": "system", 00:22:08.047 "dma_device_type": 1 00:22:08.047 }, 00:22:08.047 { 00:22:08.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.047 "dma_device_type": 2 00:22:08.047 }, 00:22:08.047 { 00:22:08.047 "dma_device_id": "system", 00:22:08.047 "dma_device_type": 1 00:22:08.047 }, 00:22:08.047 { 00:22:08.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.047 "dma_device_type": 2 00:22:08.047 }, 00:22:08.047 { 00:22:08.047 "dma_device_id": "system", 00:22:08.047 "dma_device_type": 1 00:22:08.047 }, 00:22:08.047 { 00:22:08.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.047 "dma_device_type": 2 00:22:08.047 }, 00:22:08.047 { 00:22:08.047 "dma_device_id": "system", 00:22:08.047 "dma_device_type": 1 00:22:08.047 }, 00:22:08.047 { 00:22:08.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.047 "dma_device_type": 2 00:22:08.047 } 00:22:08.047 ], 00:22:08.047 "driver_specific": { 00:22:08.047 "raid": { 00:22:08.047 "uuid": "09e41458-b3f3-4d4e-8b2a-2adb73a308f7", 00:22:08.047 "strip_size_kb": 64, 00:22:08.047 "state": "online", 00:22:08.047 "raid_level": "concat", 00:22:08.047 "superblock": true, 00:22:08.047 "num_base_bdevs": 4, 00:22:08.047 "num_base_bdevs_discovered": 4, 00:22:08.047 "num_base_bdevs_operational": 4, 00:22:08.047 "base_bdevs_list": [ 00:22:08.047 { 00:22:08.047 "name": "BaseBdev1", 00:22:08.047 "uuid": "a1723749-cce0-4b39-9d2d-b4d996923bce", 00:22:08.047 "is_configured": true, 00:22:08.047 "data_offset": 2048, 00:22:08.047 "data_size": 63488 00:22:08.047 }, 00:22:08.047 { 00:22:08.047 "name": "BaseBdev2", 00:22:08.047 "uuid": "1a9c60ff-f95e-43fd-b4bb-8c353b791006", 00:22:08.047 "is_configured": true, 00:22:08.047 "data_offset": 2048, 00:22:08.047 "data_size": 63488 00:22:08.047 }, 00:22:08.047 { 00:22:08.047 "name": "BaseBdev3", 00:22:08.047 "uuid": "d338cc6b-ad78-4df6-9dfb-80f726234422", 00:22:08.047 "is_configured": true, 
00:22:08.047 "data_offset": 2048, 00:22:08.047 "data_size": 63488 00:22:08.047 }, 00:22:08.047 { 00:22:08.047 "name": "BaseBdev4", 00:22:08.047 "uuid": "37d0e30a-3f77-4489-bac8-782478547808", 00:22:08.047 "is_configured": true, 00:22:08.047 "data_offset": 2048, 00:22:08.047 "data_size": 63488 00:22:08.047 } 00:22:08.047 ] 00:22:08.047 } 00:22:08.047 } 00:22:08.047 }' 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:08.047 BaseBdev2 00:22:08.047 BaseBdev3 00:22:08.047 BaseBdev4' 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:08.047 12:53:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:08.047 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.048 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:08.048 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:08.048 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:22:08.048 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:08.048 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.048 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.048 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:08.048 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.048 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:08.048 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:08.048 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:08.048 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.048 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.048 [2024-12-05 12:53:50.622982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:08.048 [2024-12-05 12:53:50.623005] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:08.048 [2024-12-05 12:53:50.623042] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.308 "name": "Existed_Raid", 00:22:08.308 "uuid": "09e41458-b3f3-4d4e-8b2a-2adb73a308f7", 00:22:08.308 "strip_size_kb": 64, 00:22:08.308 "state": "offline", 00:22:08.308 "raid_level": "concat", 00:22:08.308 "superblock": true, 00:22:08.308 "num_base_bdevs": 4, 00:22:08.308 "num_base_bdevs_discovered": 3, 00:22:08.308 "num_base_bdevs_operational": 3, 00:22:08.308 "base_bdevs_list": [ 00:22:08.308 { 00:22:08.308 "name": null, 00:22:08.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.308 "is_configured": false, 00:22:08.308 "data_offset": 0, 00:22:08.308 "data_size": 63488 00:22:08.308 }, 00:22:08.308 { 00:22:08.308 "name": "BaseBdev2", 00:22:08.308 "uuid": "1a9c60ff-f95e-43fd-b4bb-8c353b791006", 00:22:08.308 "is_configured": true, 00:22:08.308 "data_offset": 2048, 00:22:08.308 "data_size": 63488 00:22:08.308 }, 00:22:08.308 { 00:22:08.308 "name": "BaseBdev3", 00:22:08.308 "uuid": "d338cc6b-ad78-4df6-9dfb-80f726234422", 00:22:08.308 "is_configured": true, 00:22:08.308 "data_offset": 2048, 00:22:08.308 "data_size": 63488 00:22:08.308 }, 00:22:08.308 { 00:22:08.308 "name": "BaseBdev4", 00:22:08.308 "uuid": "37d0e30a-3f77-4489-bac8-782478547808", 00:22:08.308 "is_configured": true, 00:22:08.308 "data_offset": 2048, 00:22:08.308 "data_size": 63488 00:22:08.308 } 00:22:08.308 ] 00:22:08.308 }' 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.308 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.568 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:08.568 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:08.568 12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.568 
12:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:08.568 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.568 12:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.568 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.568 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:08.568 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:08.568 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:08.568 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.568 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.568 [2024-12-05 12:53:51.033791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:08.568 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.568 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:08.568 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:08.568 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:08.568 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.568 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.568 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.568 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:22:08.568 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:08.568 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:08.568 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:08.568 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.568 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.568 [2024-12-05 12:53:51.121330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:22:08.829 12:53:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.829 [2024-12-05 12:53:51.204741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:08.829 [2024-12-05 12:53:51.204782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.829 BaseBdev2 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.829 [ 00:22:08.829 { 00:22:08.829 "name": "BaseBdev2", 00:22:08.829 "aliases": [ 00:22:08.829 
"47edd87c-0edf-4955-96ab-ff0f38fcd51e" 00:22:08.829 ], 00:22:08.829 "product_name": "Malloc disk", 00:22:08.829 "block_size": 512, 00:22:08.829 "num_blocks": 65536, 00:22:08.829 "uuid": "47edd87c-0edf-4955-96ab-ff0f38fcd51e", 00:22:08.829 "assigned_rate_limits": { 00:22:08.829 "rw_ios_per_sec": 0, 00:22:08.829 "rw_mbytes_per_sec": 0, 00:22:08.829 "r_mbytes_per_sec": 0, 00:22:08.829 "w_mbytes_per_sec": 0 00:22:08.829 }, 00:22:08.829 "claimed": false, 00:22:08.829 "zoned": false, 00:22:08.829 "supported_io_types": { 00:22:08.829 "read": true, 00:22:08.829 "write": true, 00:22:08.829 "unmap": true, 00:22:08.829 "flush": true, 00:22:08.829 "reset": true, 00:22:08.829 "nvme_admin": false, 00:22:08.829 "nvme_io": false, 00:22:08.829 "nvme_io_md": false, 00:22:08.829 "write_zeroes": true, 00:22:08.829 "zcopy": true, 00:22:08.829 "get_zone_info": false, 00:22:08.829 "zone_management": false, 00:22:08.829 "zone_append": false, 00:22:08.829 "compare": false, 00:22:08.829 "compare_and_write": false, 00:22:08.829 "abort": true, 00:22:08.829 "seek_hole": false, 00:22:08.829 "seek_data": false, 00:22:08.829 "copy": true, 00:22:08.829 "nvme_iov_md": false 00:22:08.829 }, 00:22:08.829 "memory_domains": [ 00:22:08.829 { 00:22:08.829 "dma_device_id": "system", 00:22:08.829 "dma_device_type": 1 00:22:08.829 }, 00:22:08.829 { 00:22:08.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.829 "dma_device_type": 2 00:22:08.829 } 00:22:08.829 ], 00:22:08.829 "driver_specific": {} 00:22:08.829 } 00:22:08.829 ] 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:08.829 12:53:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.829 BaseBdev3 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:08.829 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:08.830 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:08.830 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.830 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.830 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.830 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:08.830 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.830 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.830 [ 00:22:08.830 { 
00:22:08.830 "name": "BaseBdev3", 00:22:08.830 "aliases": [ 00:22:08.830 "f185c7fc-3320-4b9a-b53a-8a75bcd6b9e2" 00:22:08.830 ], 00:22:08.830 "product_name": "Malloc disk", 00:22:08.830 "block_size": 512, 00:22:08.830 "num_blocks": 65536, 00:22:08.830 "uuid": "f185c7fc-3320-4b9a-b53a-8a75bcd6b9e2", 00:22:08.830 "assigned_rate_limits": { 00:22:08.830 "rw_ios_per_sec": 0, 00:22:08.830 "rw_mbytes_per_sec": 0, 00:22:08.830 "r_mbytes_per_sec": 0, 00:22:08.830 "w_mbytes_per_sec": 0 00:22:08.830 }, 00:22:08.830 "claimed": false, 00:22:08.830 "zoned": false, 00:22:08.830 "supported_io_types": { 00:22:08.830 "read": true, 00:22:08.830 "write": true, 00:22:08.830 "unmap": true, 00:22:08.830 "flush": true, 00:22:08.830 "reset": true, 00:22:08.830 "nvme_admin": false, 00:22:08.830 "nvme_io": false, 00:22:08.830 "nvme_io_md": false, 00:22:08.830 "write_zeroes": true, 00:22:08.830 "zcopy": true, 00:22:08.830 "get_zone_info": false, 00:22:08.830 "zone_management": false, 00:22:08.830 "zone_append": false, 00:22:08.830 "compare": false, 00:22:08.830 "compare_and_write": false, 00:22:08.830 "abort": true, 00:22:08.830 "seek_hole": false, 00:22:08.830 "seek_data": false, 00:22:08.830 "copy": true, 00:22:08.830 "nvme_iov_md": false 00:22:08.830 }, 00:22:08.830 "memory_domains": [ 00:22:08.830 { 00:22:08.830 "dma_device_id": "system", 00:22:08.830 "dma_device_type": 1 00:22:08.830 }, 00:22:08.830 { 00:22:08.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.830 "dma_device_type": 2 00:22:08.830 } 00:22:08.830 ], 00:22:08.830 "driver_specific": {} 00:22:08.830 } 00:22:08.830 ] 00:22:08.830 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.830 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:08.830 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:08.830 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:22:08.830 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:08.830 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.830 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.089 BaseBdev4 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:22:09.089 [ 00:22:09.089 { 00:22:09.089 "name": "BaseBdev4", 00:22:09.089 "aliases": [ 00:22:09.089 "7d434e1f-71cf-45cb-8302-53164f1364e8" 00:22:09.089 ], 00:22:09.089 "product_name": "Malloc disk", 00:22:09.089 "block_size": 512, 00:22:09.089 "num_blocks": 65536, 00:22:09.089 "uuid": "7d434e1f-71cf-45cb-8302-53164f1364e8", 00:22:09.089 "assigned_rate_limits": { 00:22:09.089 "rw_ios_per_sec": 0, 00:22:09.089 "rw_mbytes_per_sec": 0, 00:22:09.089 "r_mbytes_per_sec": 0, 00:22:09.089 "w_mbytes_per_sec": 0 00:22:09.089 }, 00:22:09.089 "claimed": false, 00:22:09.089 "zoned": false, 00:22:09.089 "supported_io_types": { 00:22:09.089 "read": true, 00:22:09.089 "write": true, 00:22:09.089 "unmap": true, 00:22:09.089 "flush": true, 00:22:09.089 "reset": true, 00:22:09.089 "nvme_admin": false, 00:22:09.089 "nvme_io": false, 00:22:09.089 "nvme_io_md": false, 00:22:09.089 "write_zeroes": true, 00:22:09.089 "zcopy": true, 00:22:09.089 "get_zone_info": false, 00:22:09.089 "zone_management": false, 00:22:09.089 "zone_append": false, 00:22:09.089 "compare": false, 00:22:09.089 "compare_and_write": false, 00:22:09.089 "abort": true, 00:22:09.089 "seek_hole": false, 00:22:09.089 "seek_data": false, 00:22:09.089 "copy": true, 00:22:09.089 "nvme_iov_md": false 00:22:09.089 }, 00:22:09.089 "memory_domains": [ 00:22:09.089 { 00:22:09.089 "dma_device_id": "system", 00:22:09.089 "dma_device_type": 1 00:22:09.089 }, 00:22:09.089 { 00:22:09.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.089 "dma_device_type": 2 00:22:09.089 } 00:22:09.089 ], 00:22:09.089 "driver_specific": {} 00:22:09.089 } 00:22:09.089 ] 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:09.089 12:53:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.089 [2024-12-05 12:53:51.449104] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:09.089 [2024-12-05 12:53:51.449248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:09.089 [2024-12-05 12:53:51.449314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:09.089 [2024-12-05 12:53:51.450971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:09.089 [2024-12-05 12:53:51.451089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.089 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.089 "name": "Existed_Raid", 00:22:09.089 "uuid": "c4c8d1f0-84f2-4b2d-959c-59f3c4303be6", 00:22:09.089 "strip_size_kb": 64, 00:22:09.089 "state": "configuring", 00:22:09.089 "raid_level": "concat", 00:22:09.089 "superblock": true, 00:22:09.089 "num_base_bdevs": 4, 00:22:09.089 "num_base_bdevs_discovered": 3, 00:22:09.089 "num_base_bdevs_operational": 4, 00:22:09.089 "base_bdevs_list": [ 00:22:09.089 { 00:22:09.089 "name": "BaseBdev1", 00:22:09.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.089 "is_configured": false, 00:22:09.089 "data_offset": 0, 00:22:09.089 "data_size": 0 00:22:09.089 }, 00:22:09.089 { 00:22:09.089 "name": "BaseBdev2", 00:22:09.089 "uuid": "47edd87c-0edf-4955-96ab-ff0f38fcd51e", 00:22:09.089 "is_configured": true, 00:22:09.089 "data_offset": 2048, 00:22:09.089 "data_size": 63488 
00:22:09.089 }, 00:22:09.089 { 00:22:09.089 "name": "BaseBdev3", 00:22:09.089 "uuid": "f185c7fc-3320-4b9a-b53a-8a75bcd6b9e2", 00:22:09.089 "is_configured": true, 00:22:09.090 "data_offset": 2048, 00:22:09.090 "data_size": 63488 00:22:09.090 }, 00:22:09.090 { 00:22:09.090 "name": "BaseBdev4", 00:22:09.090 "uuid": "7d434e1f-71cf-45cb-8302-53164f1364e8", 00:22:09.090 "is_configured": true, 00:22:09.090 "data_offset": 2048, 00:22:09.090 "data_size": 63488 00:22:09.090 } 00:22:09.090 ] 00:22:09.090 }' 00:22:09.090 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.090 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.349 [2024-12-05 12:53:51.789153] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.349 "name": "Existed_Raid", 00:22:09.349 "uuid": "c4c8d1f0-84f2-4b2d-959c-59f3c4303be6", 00:22:09.349 "strip_size_kb": 64, 00:22:09.349 "state": "configuring", 00:22:09.349 "raid_level": "concat", 00:22:09.349 "superblock": true, 00:22:09.349 "num_base_bdevs": 4, 00:22:09.349 "num_base_bdevs_discovered": 2, 00:22:09.349 "num_base_bdevs_operational": 4, 00:22:09.349 "base_bdevs_list": [ 00:22:09.349 { 00:22:09.349 "name": "BaseBdev1", 00:22:09.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.349 "is_configured": false, 00:22:09.349 "data_offset": 0, 00:22:09.349 "data_size": 0 00:22:09.349 }, 00:22:09.349 { 00:22:09.349 "name": null, 00:22:09.349 "uuid": "47edd87c-0edf-4955-96ab-ff0f38fcd51e", 00:22:09.349 "is_configured": false, 00:22:09.349 "data_offset": 0, 00:22:09.349 "data_size": 63488 
00:22:09.349 }, 00:22:09.349 { 00:22:09.349 "name": "BaseBdev3", 00:22:09.349 "uuid": "f185c7fc-3320-4b9a-b53a-8a75bcd6b9e2", 00:22:09.349 "is_configured": true, 00:22:09.349 "data_offset": 2048, 00:22:09.349 "data_size": 63488 00:22:09.349 }, 00:22:09.349 { 00:22:09.349 "name": "BaseBdev4", 00:22:09.349 "uuid": "7d434e1f-71cf-45cb-8302-53164f1364e8", 00:22:09.349 "is_configured": true, 00:22:09.349 "data_offset": 2048, 00:22:09.349 "data_size": 63488 00:22:09.349 } 00:22:09.349 ] 00:22:09.349 }' 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.349 12:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.658 [2024-12-05 12:53:52.151967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:09.658 BaseBdev1 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.658 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.658 [ 00:22:09.658 { 00:22:09.658 "name": "BaseBdev1", 00:22:09.658 "aliases": [ 00:22:09.658 "90619033-15d2-4d40-8618-c0454e91faf2" 00:22:09.658 ], 00:22:09.658 "product_name": "Malloc disk", 00:22:09.658 "block_size": 512, 00:22:09.658 "num_blocks": 65536, 00:22:09.658 "uuid": "90619033-15d2-4d40-8618-c0454e91faf2", 00:22:09.658 "assigned_rate_limits": { 00:22:09.658 "rw_ios_per_sec": 0, 00:22:09.658 "rw_mbytes_per_sec": 0, 
00:22:09.658 "r_mbytes_per_sec": 0, 00:22:09.658 "w_mbytes_per_sec": 0 00:22:09.658 }, 00:22:09.658 "claimed": true, 00:22:09.658 "claim_type": "exclusive_write", 00:22:09.658 "zoned": false, 00:22:09.658 "supported_io_types": { 00:22:09.658 "read": true, 00:22:09.658 "write": true, 00:22:09.658 "unmap": true, 00:22:09.659 "flush": true, 00:22:09.659 "reset": true, 00:22:09.659 "nvme_admin": false, 00:22:09.659 "nvme_io": false, 00:22:09.659 "nvme_io_md": false, 00:22:09.659 "write_zeroes": true, 00:22:09.659 "zcopy": true, 00:22:09.659 "get_zone_info": false, 00:22:09.659 "zone_management": false, 00:22:09.659 "zone_append": false, 00:22:09.659 "compare": false, 00:22:09.659 "compare_and_write": false, 00:22:09.659 "abort": true, 00:22:09.659 "seek_hole": false, 00:22:09.659 "seek_data": false, 00:22:09.659 "copy": true, 00:22:09.659 "nvme_iov_md": false 00:22:09.659 }, 00:22:09.659 "memory_domains": [ 00:22:09.659 { 00:22:09.659 "dma_device_id": "system", 00:22:09.659 "dma_device_type": 1 00:22:09.659 }, 00:22:09.659 { 00:22:09.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.659 "dma_device_type": 2 00:22:09.659 } 00:22:09.659 ], 00:22:09.659 "driver_specific": {} 00:22:09.659 } 00:22:09.659 ] 00:22:09.659 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.659 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:09.659 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:09.659 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:09.659 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:09.659 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:09.659 12:53:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:09.659 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:09.659 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.659 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.659 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.659 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.659 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.659 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.659 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.659 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.659 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.659 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.659 "name": "Existed_Raid", 00:22:09.659 "uuid": "c4c8d1f0-84f2-4b2d-959c-59f3c4303be6", 00:22:09.659 "strip_size_kb": 64, 00:22:09.659 "state": "configuring", 00:22:09.659 "raid_level": "concat", 00:22:09.659 "superblock": true, 00:22:09.659 "num_base_bdevs": 4, 00:22:09.659 "num_base_bdevs_discovered": 3, 00:22:09.659 "num_base_bdevs_operational": 4, 00:22:09.659 "base_bdevs_list": [ 00:22:09.659 { 00:22:09.659 "name": "BaseBdev1", 00:22:09.659 "uuid": "90619033-15d2-4d40-8618-c0454e91faf2", 00:22:09.659 "is_configured": true, 00:22:09.659 "data_offset": 2048, 00:22:09.659 "data_size": 63488 00:22:09.659 }, 00:22:09.659 { 
00:22:09.659 "name": null, 00:22:09.659 "uuid": "47edd87c-0edf-4955-96ab-ff0f38fcd51e", 00:22:09.659 "is_configured": false, 00:22:09.659 "data_offset": 0, 00:22:09.659 "data_size": 63488 00:22:09.659 }, 00:22:09.659 { 00:22:09.659 "name": "BaseBdev3", 00:22:09.659 "uuid": "f185c7fc-3320-4b9a-b53a-8a75bcd6b9e2", 00:22:09.659 "is_configured": true, 00:22:09.659 "data_offset": 2048, 00:22:09.659 "data_size": 63488 00:22:09.659 }, 00:22:09.659 { 00:22:09.659 "name": "BaseBdev4", 00:22:09.659 "uuid": "7d434e1f-71cf-45cb-8302-53164f1364e8", 00:22:09.659 "is_configured": true, 00:22:09.659 "data_offset": 2048, 00:22:09.659 "data_size": 63488 00:22:09.659 } 00:22:09.659 ] 00:22:09.659 }' 00:22:09.659 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.659 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.929 [2024-12-05 12:53:52.500151] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.929 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.188 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.188 12:53:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.188 "name": "Existed_Raid", 00:22:10.188 "uuid": "c4c8d1f0-84f2-4b2d-959c-59f3c4303be6", 00:22:10.188 "strip_size_kb": 64, 00:22:10.188 "state": "configuring", 00:22:10.188 "raid_level": "concat", 00:22:10.188 "superblock": true, 00:22:10.188 "num_base_bdevs": 4, 00:22:10.188 "num_base_bdevs_discovered": 2, 00:22:10.188 "num_base_bdevs_operational": 4, 00:22:10.188 "base_bdevs_list": [ 00:22:10.188 { 00:22:10.188 "name": "BaseBdev1", 00:22:10.188 "uuid": "90619033-15d2-4d40-8618-c0454e91faf2", 00:22:10.188 "is_configured": true, 00:22:10.188 "data_offset": 2048, 00:22:10.188 "data_size": 63488 00:22:10.188 }, 00:22:10.188 { 00:22:10.188 "name": null, 00:22:10.188 "uuid": "47edd87c-0edf-4955-96ab-ff0f38fcd51e", 00:22:10.188 "is_configured": false, 00:22:10.188 "data_offset": 0, 00:22:10.188 "data_size": 63488 00:22:10.188 }, 00:22:10.188 { 00:22:10.188 "name": null, 00:22:10.188 "uuid": "f185c7fc-3320-4b9a-b53a-8a75bcd6b9e2", 00:22:10.188 "is_configured": false, 00:22:10.188 "data_offset": 0, 00:22:10.188 "data_size": 63488 00:22:10.188 }, 00:22:10.188 { 00:22:10.188 "name": "BaseBdev4", 00:22:10.188 "uuid": "7d434e1f-71cf-45cb-8302-53164f1364e8", 00:22:10.188 "is_configured": true, 00:22:10.188 "data_offset": 2048, 00:22:10.188 "data_size": 63488 00:22:10.188 } 00:22:10.188 ] 00:22:10.188 }' 00:22:10.188 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.188 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.449 12:53:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.449 [2024-12-05 12:53:52.840210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.449 "name": "Existed_Raid", 00:22:10.449 "uuid": "c4c8d1f0-84f2-4b2d-959c-59f3c4303be6", 00:22:10.449 "strip_size_kb": 64, 00:22:10.449 "state": "configuring", 00:22:10.449 "raid_level": "concat", 00:22:10.449 "superblock": true, 00:22:10.449 "num_base_bdevs": 4, 00:22:10.449 "num_base_bdevs_discovered": 3, 00:22:10.449 "num_base_bdevs_operational": 4, 00:22:10.449 "base_bdevs_list": [ 00:22:10.449 { 00:22:10.449 "name": "BaseBdev1", 00:22:10.449 "uuid": "90619033-15d2-4d40-8618-c0454e91faf2", 00:22:10.449 "is_configured": true, 00:22:10.449 "data_offset": 2048, 00:22:10.449 "data_size": 63488 00:22:10.449 }, 00:22:10.449 { 00:22:10.449 "name": null, 00:22:10.449 "uuid": "47edd87c-0edf-4955-96ab-ff0f38fcd51e", 00:22:10.449 "is_configured": false, 00:22:10.449 "data_offset": 0, 00:22:10.449 "data_size": 63488 00:22:10.449 }, 00:22:10.449 { 00:22:10.449 "name": "BaseBdev3", 00:22:10.449 "uuid": "f185c7fc-3320-4b9a-b53a-8a75bcd6b9e2", 00:22:10.449 "is_configured": true, 00:22:10.449 "data_offset": 2048, 00:22:10.449 "data_size": 63488 00:22:10.449 }, 00:22:10.449 { 00:22:10.449 "name": "BaseBdev4", 00:22:10.449 "uuid": 
"7d434e1f-71cf-45cb-8302-53164f1364e8", 00:22:10.449 "is_configured": true, 00:22:10.449 "data_offset": 2048, 00:22:10.449 "data_size": 63488 00:22:10.449 } 00:22:10.449 ] 00:22:10.449 }' 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.449 12:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.712 [2024-12-05 12:53:53.160336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.712 "name": "Existed_Raid", 00:22:10.712 "uuid": "c4c8d1f0-84f2-4b2d-959c-59f3c4303be6", 00:22:10.712 "strip_size_kb": 64, 00:22:10.712 "state": "configuring", 00:22:10.712 "raid_level": "concat", 00:22:10.712 "superblock": true, 00:22:10.712 "num_base_bdevs": 4, 00:22:10.712 "num_base_bdevs_discovered": 2, 00:22:10.712 "num_base_bdevs_operational": 4, 00:22:10.712 "base_bdevs_list": [ 00:22:10.712 { 00:22:10.712 "name": null, 00:22:10.712 
"uuid": "90619033-15d2-4d40-8618-c0454e91faf2", 00:22:10.712 "is_configured": false, 00:22:10.712 "data_offset": 0, 00:22:10.712 "data_size": 63488 00:22:10.712 }, 00:22:10.712 { 00:22:10.712 "name": null, 00:22:10.712 "uuid": "47edd87c-0edf-4955-96ab-ff0f38fcd51e", 00:22:10.712 "is_configured": false, 00:22:10.712 "data_offset": 0, 00:22:10.712 "data_size": 63488 00:22:10.712 }, 00:22:10.712 { 00:22:10.712 "name": "BaseBdev3", 00:22:10.712 "uuid": "f185c7fc-3320-4b9a-b53a-8a75bcd6b9e2", 00:22:10.712 "is_configured": true, 00:22:10.712 "data_offset": 2048, 00:22:10.712 "data_size": 63488 00:22:10.712 }, 00:22:10.712 { 00:22:10.712 "name": "BaseBdev4", 00:22:10.712 "uuid": "7d434e1f-71cf-45cb-8302-53164f1364e8", 00:22:10.712 "is_configured": true, 00:22:10.712 "data_offset": 2048, 00:22:10.712 "data_size": 63488 00:22:10.712 } 00:22:10.712 ] 00:22:10.712 }' 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.712 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.972 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.972 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.972 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:10.972 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.972 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.231 [2024-12-05 12:53:53.564810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.231 12:53:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.231 "name": "Existed_Raid", 00:22:11.231 "uuid": "c4c8d1f0-84f2-4b2d-959c-59f3c4303be6", 00:22:11.231 "strip_size_kb": 64, 00:22:11.231 "state": "configuring", 00:22:11.231 "raid_level": "concat", 00:22:11.231 "superblock": true, 00:22:11.231 "num_base_bdevs": 4, 00:22:11.231 "num_base_bdevs_discovered": 3, 00:22:11.231 "num_base_bdevs_operational": 4, 00:22:11.231 "base_bdevs_list": [ 00:22:11.231 { 00:22:11.231 "name": null, 00:22:11.231 "uuid": "90619033-15d2-4d40-8618-c0454e91faf2", 00:22:11.231 "is_configured": false, 00:22:11.231 "data_offset": 0, 00:22:11.231 "data_size": 63488 00:22:11.231 }, 00:22:11.231 { 00:22:11.231 "name": "BaseBdev2", 00:22:11.231 "uuid": "47edd87c-0edf-4955-96ab-ff0f38fcd51e", 00:22:11.231 "is_configured": true, 00:22:11.231 "data_offset": 2048, 00:22:11.231 "data_size": 63488 00:22:11.231 }, 00:22:11.231 { 00:22:11.231 "name": "BaseBdev3", 00:22:11.231 "uuid": "f185c7fc-3320-4b9a-b53a-8a75bcd6b9e2", 00:22:11.231 "is_configured": true, 00:22:11.231 "data_offset": 2048, 00:22:11.231 "data_size": 63488 00:22:11.231 }, 00:22:11.231 { 00:22:11.231 "name": "BaseBdev4", 00:22:11.231 "uuid": "7d434e1f-71cf-45cb-8302-53164f1364e8", 00:22:11.231 "is_configured": true, 00:22:11.231 "data_offset": 2048, 00:22:11.231 "data_size": 63488 00:22:11.231 } 00:22:11.231 ] 00:22:11.231 }' 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.231 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.491 12:53:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 90619033-15d2-4d40-8618-c0454e91faf2 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.491 [2024-12-05 12:53:53.939532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:11.491 [2024-12-05 12:53:53.939710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:11.491 [2024-12-05 12:53:53.939720] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:11.491 NewBaseBdev 00:22:11.491 [2024-12-05 12:53:53.939943] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:22:11.491 [2024-12-05 12:53:53.940048] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:11.491 [2024-12-05 12:53:53.940062] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:11.491 [2024-12-05 12:53:53.940157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:11.491 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.491 
12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.491 [ 00:22:11.491 { 00:22:11.491 "name": "NewBaseBdev", 00:22:11.491 "aliases": [ 00:22:11.491 "90619033-15d2-4d40-8618-c0454e91faf2" 00:22:11.491 ], 00:22:11.491 "product_name": "Malloc disk", 00:22:11.491 "block_size": 512, 00:22:11.491 "num_blocks": 65536, 00:22:11.491 "uuid": "90619033-15d2-4d40-8618-c0454e91faf2", 00:22:11.492 "assigned_rate_limits": { 00:22:11.492 "rw_ios_per_sec": 0, 00:22:11.492 "rw_mbytes_per_sec": 0, 00:22:11.492 "r_mbytes_per_sec": 0, 00:22:11.492 "w_mbytes_per_sec": 0 00:22:11.492 }, 00:22:11.492 "claimed": true, 00:22:11.492 "claim_type": "exclusive_write", 00:22:11.492 "zoned": false, 00:22:11.492 "supported_io_types": { 00:22:11.492 "read": true, 00:22:11.492 "write": true, 00:22:11.492 "unmap": true, 00:22:11.492 "flush": true, 00:22:11.492 "reset": true, 00:22:11.492 "nvme_admin": false, 00:22:11.492 "nvme_io": false, 00:22:11.492 "nvme_io_md": false, 00:22:11.492 "write_zeroes": true, 00:22:11.492 "zcopy": true, 00:22:11.492 "get_zone_info": false, 00:22:11.492 "zone_management": false, 00:22:11.492 "zone_append": false, 00:22:11.492 "compare": false, 00:22:11.492 "compare_and_write": false, 00:22:11.492 "abort": true, 00:22:11.492 "seek_hole": false, 00:22:11.492 "seek_data": false, 00:22:11.492 "copy": true, 00:22:11.492 "nvme_iov_md": false 00:22:11.492 }, 00:22:11.492 "memory_domains": [ 00:22:11.492 { 00:22:11.492 "dma_device_id": "system", 00:22:11.492 "dma_device_type": 1 00:22:11.492 }, 00:22:11.492 { 00:22:11.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.492 "dma_device_type": 2 00:22:11.492 } 00:22:11.492 ], 00:22:11.492 "driver_specific": {} 00:22:11.492 } 00:22:11.492 ] 00:22:11.492 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.492 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:11.492 12:53:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:22:11.492 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:11.492 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:11.492 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:11.492 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:11.492 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:11.492 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.492 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:11.492 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.492 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.492 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.492 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.492 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.492 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.492 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.492 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.492 "name": "Existed_Raid", 00:22:11.492 "uuid": "c4c8d1f0-84f2-4b2d-959c-59f3c4303be6", 00:22:11.492 "strip_size_kb": 64, 00:22:11.492 
"state": "online", 00:22:11.492 "raid_level": "concat", 00:22:11.492 "superblock": true, 00:22:11.492 "num_base_bdevs": 4, 00:22:11.492 "num_base_bdevs_discovered": 4, 00:22:11.492 "num_base_bdevs_operational": 4, 00:22:11.492 "base_bdevs_list": [ 00:22:11.492 { 00:22:11.492 "name": "NewBaseBdev", 00:22:11.492 "uuid": "90619033-15d2-4d40-8618-c0454e91faf2", 00:22:11.492 "is_configured": true, 00:22:11.492 "data_offset": 2048, 00:22:11.492 "data_size": 63488 00:22:11.492 }, 00:22:11.492 { 00:22:11.492 "name": "BaseBdev2", 00:22:11.492 "uuid": "47edd87c-0edf-4955-96ab-ff0f38fcd51e", 00:22:11.492 "is_configured": true, 00:22:11.492 "data_offset": 2048, 00:22:11.492 "data_size": 63488 00:22:11.492 }, 00:22:11.492 { 00:22:11.492 "name": "BaseBdev3", 00:22:11.492 "uuid": "f185c7fc-3320-4b9a-b53a-8a75bcd6b9e2", 00:22:11.492 "is_configured": true, 00:22:11.492 "data_offset": 2048, 00:22:11.492 "data_size": 63488 00:22:11.492 }, 00:22:11.492 { 00:22:11.492 "name": "BaseBdev4", 00:22:11.492 "uuid": "7d434e1f-71cf-45cb-8302-53164f1364e8", 00:22:11.492 "is_configured": true, 00:22:11.492 "data_offset": 2048, 00:22:11.492 "data_size": 63488 00:22:11.492 } 00:22:11.492 ] 00:22:11.492 }' 00:22:11.492 12:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.492 12:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.751 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:11.751 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:11.751 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:11.751 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:11.751 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:11.751 
12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:11.751 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:11.751 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:11.751 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.751 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.751 [2024-12-05 12:53:54.279976] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:11.751 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.751 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:11.751 "name": "Existed_Raid", 00:22:11.751 "aliases": [ 00:22:11.752 "c4c8d1f0-84f2-4b2d-959c-59f3c4303be6" 00:22:11.752 ], 00:22:11.752 "product_name": "Raid Volume", 00:22:11.752 "block_size": 512, 00:22:11.752 "num_blocks": 253952, 00:22:11.752 "uuid": "c4c8d1f0-84f2-4b2d-959c-59f3c4303be6", 00:22:11.752 "assigned_rate_limits": { 00:22:11.752 "rw_ios_per_sec": 0, 00:22:11.752 "rw_mbytes_per_sec": 0, 00:22:11.752 "r_mbytes_per_sec": 0, 00:22:11.752 "w_mbytes_per_sec": 0 00:22:11.752 }, 00:22:11.752 "claimed": false, 00:22:11.752 "zoned": false, 00:22:11.752 "supported_io_types": { 00:22:11.752 "read": true, 00:22:11.752 "write": true, 00:22:11.752 "unmap": true, 00:22:11.752 "flush": true, 00:22:11.752 "reset": true, 00:22:11.752 "nvme_admin": false, 00:22:11.752 "nvme_io": false, 00:22:11.752 "nvme_io_md": false, 00:22:11.752 "write_zeroes": true, 00:22:11.752 "zcopy": false, 00:22:11.752 "get_zone_info": false, 00:22:11.752 "zone_management": false, 00:22:11.752 "zone_append": false, 00:22:11.752 "compare": false, 00:22:11.752 "compare_and_write": false, 00:22:11.752 "abort": 
false, 00:22:11.752 "seek_hole": false, 00:22:11.752 "seek_data": false, 00:22:11.752 "copy": false, 00:22:11.752 "nvme_iov_md": false 00:22:11.752 }, 00:22:11.752 "memory_domains": [ 00:22:11.752 { 00:22:11.752 "dma_device_id": "system", 00:22:11.752 "dma_device_type": 1 00:22:11.752 }, 00:22:11.752 { 00:22:11.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.752 "dma_device_type": 2 00:22:11.752 }, 00:22:11.752 { 00:22:11.752 "dma_device_id": "system", 00:22:11.752 "dma_device_type": 1 00:22:11.752 }, 00:22:11.752 { 00:22:11.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.752 "dma_device_type": 2 00:22:11.752 }, 00:22:11.752 { 00:22:11.752 "dma_device_id": "system", 00:22:11.752 "dma_device_type": 1 00:22:11.752 }, 00:22:11.752 { 00:22:11.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.752 "dma_device_type": 2 00:22:11.752 }, 00:22:11.752 { 00:22:11.752 "dma_device_id": "system", 00:22:11.752 "dma_device_type": 1 00:22:11.752 }, 00:22:11.752 { 00:22:11.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.752 "dma_device_type": 2 00:22:11.752 } 00:22:11.752 ], 00:22:11.752 "driver_specific": { 00:22:11.752 "raid": { 00:22:11.752 "uuid": "c4c8d1f0-84f2-4b2d-959c-59f3c4303be6", 00:22:11.752 "strip_size_kb": 64, 00:22:11.752 "state": "online", 00:22:11.752 "raid_level": "concat", 00:22:11.752 "superblock": true, 00:22:11.752 "num_base_bdevs": 4, 00:22:11.752 "num_base_bdevs_discovered": 4, 00:22:11.752 "num_base_bdevs_operational": 4, 00:22:11.752 "base_bdevs_list": [ 00:22:11.752 { 00:22:11.752 "name": "NewBaseBdev", 00:22:11.752 "uuid": "90619033-15d2-4d40-8618-c0454e91faf2", 00:22:11.752 "is_configured": true, 00:22:11.752 "data_offset": 2048, 00:22:11.752 "data_size": 63488 00:22:11.752 }, 00:22:11.752 { 00:22:11.752 "name": "BaseBdev2", 00:22:11.752 "uuid": "47edd87c-0edf-4955-96ab-ff0f38fcd51e", 00:22:11.752 "is_configured": true, 00:22:11.752 "data_offset": 2048, 00:22:11.752 "data_size": 63488 00:22:11.752 }, 00:22:11.752 { 00:22:11.752 
"name": "BaseBdev3", 00:22:11.752 "uuid": "f185c7fc-3320-4b9a-b53a-8a75bcd6b9e2", 00:22:11.752 "is_configured": true, 00:22:11.752 "data_offset": 2048, 00:22:11.752 "data_size": 63488 00:22:11.752 }, 00:22:11.775 { 00:22:11.775 "name": "BaseBdev4", 00:22:11.775 "uuid": "7d434e1f-71cf-45cb-8302-53164f1364e8", 00:22:11.775 "is_configured": true, 00:22:11.775 "data_offset": 2048, 00:22:11.775 "data_size": 63488 00:22:11.775 } 00:22:11.775 ] 00:22:11.775 } 00:22:11.775 } 00:22:11.775 }' 00:22:11.775 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:12.036 BaseBdev2 00:22:12.036 BaseBdev3 00:22:12.036 BaseBdev4' 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:12.036 12:53:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.036 [2024-12-05 12:53:54.491687] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:12.036 [2024-12-05 12:53:54.491713] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:12.036 [2024-12-05 12:53:54.491786] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:12.036 [2024-12-05 12:53:54.491848] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:12.036 [2024-12-05 12:53:54.491856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69932 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 69932 ']' 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69932 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69932 00:22:12.036 killing process with pid 69932 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69932' 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69932 00:22:12.036 [2024-12-05 12:53:54.518903] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:12.036 12:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69932 00:22:12.297 [2024-12-05 12:53:54.729232] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:12.867 ************************************ 00:22:12.867 END TEST raid_state_function_test_sb 00:22:12.867 ************************************ 00:22:12.867 12:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:22:12.867 00:22:12.867 real 0m7.956s 00:22:12.867 user 0m12.815s 00:22:12.867 sys 
0m1.234s 00:22:12.867 12:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:12.867 12:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.867 12:53:55 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:22:12.867 12:53:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:12.867 12:53:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:12.867 12:53:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:12.867 ************************************ 00:22:12.867 START TEST raid_superblock_test 00:22:12.867 ************************************ 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70570 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70570 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70570 ']' 00:22:12.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.867 12:53:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.867 [2024-12-05 12:53:55.437006] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:22:12.867 [2024-12-05 12:53:55.437130] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70570 ] 00:22:13.127 [2024-12-05 12:53:55.592842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.127 [2024-12-05 12:53:55.706227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.386 [2024-12-05 12:53:55.857257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:13.386 [2024-12-05 12:53:55.857298] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:22:13.957 
12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.957 malloc1 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.957 [2024-12-05 12:53:56.327604] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:13.957 [2024-12-05 12:53:56.327663] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:13.957 [2024-12-05 12:53:56.327685] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:13.957 [2024-12-05 12:53:56.327695] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:13.957 [2024-12-05 12:53:56.329961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:13.957 [2024-12-05 12:53:56.329998] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:13.957 pt1 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.957 malloc2 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.957 [2024-12-05 12:53:56.366501] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:13.957 [2024-12-05 12:53:56.366558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:13.957 [2024-12-05 12:53:56.366587] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:13.957 [2024-12-05 12:53:56.366597] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:13.957 [2024-12-05 12:53:56.368937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:13.957 [2024-12-05 12:53:56.368969] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:13.957 
pt2 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.957 malloc3 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.957 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.958 [2024-12-05 12:53:56.414894] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:13.958 [2024-12-05 12:53:56.414944] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:13.958 [2024-12-05 12:53:56.414962] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:13.958 [2024-12-05 12:53:56.414969] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:13.958 [2024-12-05 12:53:56.416731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:13.958 [2024-12-05 12:53:56.416761] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:13.958 pt3 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.958 malloc4 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.958 [2024-12-05 12:53:56.446916] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:13.958 [2024-12-05 12:53:56.446965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:13.958 [2024-12-05 12:53:56.446980] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:13.958 [2024-12-05 12:53:56.446987] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:13.958 [2024-12-05 12:53:56.448756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:13.958 [2024-12-05 12:53:56.448785] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:13.958 pt4 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.958 [2024-12-05 12:53:56.454938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:13.958 [2024-12-05 
12:53:56.456450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:13.958 [2024-12-05 12:53:56.456524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:13.958 [2024-12-05 12:53:56.456563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:13.958 [2024-12-05 12:53:56.456708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:13.958 [2024-12-05 12:53:56.456720] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:13.958 [2024-12-05 12:53:56.456930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:13.958 [2024-12-05 12:53:56.457048] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:13.958 [2024-12-05 12:53:56.457057] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:13.958 [2024-12-05 12:53:56.457163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.958 "name": "raid_bdev1", 00:22:13.958 "uuid": "3f34369f-3d75-4757-9643-346b6aab26b9", 00:22:13.958 "strip_size_kb": 64, 00:22:13.958 "state": "online", 00:22:13.958 "raid_level": "concat", 00:22:13.958 "superblock": true, 00:22:13.958 "num_base_bdevs": 4, 00:22:13.958 "num_base_bdevs_discovered": 4, 00:22:13.958 "num_base_bdevs_operational": 4, 00:22:13.958 "base_bdevs_list": [ 00:22:13.958 { 00:22:13.958 "name": "pt1", 00:22:13.958 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:13.958 "is_configured": true, 00:22:13.958 "data_offset": 2048, 00:22:13.958 "data_size": 63488 00:22:13.958 }, 00:22:13.958 { 00:22:13.958 "name": "pt2", 00:22:13.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:13.958 "is_configured": true, 00:22:13.958 "data_offset": 2048, 00:22:13.958 "data_size": 63488 00:22:13.958 }, 00:22:13.958 { 00:22:13.958 "name": "pt3", 00:22:13.958 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:13.958 "is_configured": true, 00:22:13.958 "data_offset": 2048, 00:22:13.958 
"data_size": 63488 00:22:13.958 }, 00:22:13.958 { 00:22:13.958 "name": "pt4", 00:22:13.958 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:13.958 "is_configured": true, 00:22:13.958 "data_offset": 2048, 00:22:13.958 "data_size": 63488 00:22:13.958 } 00:22:13.958 ] 00:22:13.958 }' 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.958 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.219 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:14.219 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:14.219 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:14.219 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:14.219 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:14.219 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:14.219 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:14.219 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.219 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:14.219 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.219 [2024-12-05 12:53:56.763342] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:14.219 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.219 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:14.219 "name": "raid_bdev1", 00:22:14.219 "aliases": [ 00:22:14.219 "3f34369f-3d75-4757-9643-346b6aab26b9" 
00:22:14.219 ], 00:22:14.219 "product_name": "Raid Volume", 00:22:14.219 "block_size": 512, 00:22:14.219 "num_blocks": 253952, 00:22:14.219 "uuid": "3f34369f-3d75-4757-9643-346b6aab26b9", 00:22:14.219 "assigned_rate_limits": { 00:22:14.219 "rw_ios_per_sec": 0, 00:22:14.219 "rw_mbytes_per_sec": 0, 00:22:14.219 "r_mbytes_per_sec": 0, 00:22:14.219 "w_mbytes_per_sec": 0 00:22:14.219 }, 00:22:14.219 "claimed": false, 00:22:14.219 "zoned": false, 00:22:14.219 "supported_io_types": { 00:22:14.219 "read": true, 00:22:14.219 "write": true, 00:22:14.219 "unmap": true, 00:22:14.219 "flush": true, 00:22:14.219 "reset": true, 00:22:14.219 "nvme_admin": false, 00:22:14.219 "nvme_io": false, 00:22:14.219 "nvme_io_md": false, 00:22:14.219 "write_zeroes": true, 00:22:14.219 "zcopy": false, 00:22:14.219 "get_zone_info": false, 00:22:14.219 "zone_management": false, 00:22:14.219 "zone_append": false, 00:22:14.219 "compare": false, 00:22:14.219 "compare_and_write": false, 00:22:14.219 "abort": false, 00:22:14.219 "seek_hole": false, 00:22:14.219 "seek_data": false, 00:22:14.219 "copy": false, 00:22:14.219 "nvme_iov_md": false 00:22:14.219 }, 00:22:14.219 "memory_domains": [ 00:22:14.219 { 00:22:14.219 "dma_device_id": "system", 00:22:14.219 "dma_device_type": 1 00:22:14.219 }, 00:22:14.219 { 00:22:14.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.219 "dma_device_type": 2 00:22:14.219 }, 00:22:14.219 { 00:22:14.219 "dma_device_id": "system", 00:22:14.219 "dma_device_type": 1 00:22:14.219 }, 00:22:14.219 { 00:22:14.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.219 "dma_device_type": 2 00:22:14.219 }, 00:22:14.219 { 00:22:14.219 "dma_device_id": "system", 00:22:14.219 "dma_device_type": 1 00:22:14.219 }, 00:22:14.219 { 00:22:14.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.219 "dma_device_type": 2 00:22:14.219 }, 00:22:14.219 { 00:22:14.219 "dma_device_id": "system", 00:22:14.219 "dma_device_type": 1 00:22:14.219 }, 00:22:14.219 { 00:22:14.219 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:14.219 "dma_device_type": 2 00:22:14.219 } 00:22:14.219 ], 00:22:14.219 "driver_specific": { 00:22:14.219 "raid": { 00:22:14.219 "uuid": "3f34369f-3d75-4757-9643-346b6aab26b9", 00:22:14.219 "strip_size_kb": 64, 00:22:14.219 "state": "online", 00:22:14.219 "raid_level": "concat", 00:22:14.219 "superblock": true, 00:22:14.219 "num_base_bdevs": 4, 00:22:14.219 "num_base_bdevs_discovered": 4, 00:22:14.219 "num_base_bdevs_operational": 4, 00:22:14.219 "base_bdevs_list": [ 00:22:14.219 { 00:22:14.219 "name": "pt1", 00:22:14.219 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:14.219 "is_configured": true, 00:22:14.219 "data_offset": 2048, 00:22:14.219 "data_size": 63488 00:22:14.219 }, 00:22:14.219 { 00:22:14.219 "name": "pt2", 00:22:14.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:14.219 "is_configured": true, 00:22:14.219 "data_offset": 2048, 00:22:14.219 "data_size": 63488 00:22:14.219 }, 00:22:14.219 { 00:22:14.219 "name": "pt3", 00:22:14.219 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:14.219 "is_configured": true, 00:22:14.219 "data_offset": 2048, 00:22:14.219 "data_size": 63488 00:22:14.219 }, 00:22:14.219 { 00:22:14.219 "name": "pt4", 00:22:14.219 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:14.219 "is_configured": true, 00:22:14.219 "data_offset": 2048, 00:22:14.219 "data_size": 63488 00:22:14.219 } 00:22:14.219 ] 00:22:14.219 } 00:22:14.219 } 00:22:14.219 }' 00:22:14.219 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:14.544 pt2 00:22:14.544 pt3 00:22:14.544 pt4' 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.544 12:53:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.544 [2024-12-05 12:53:56.979301] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3f34369f-3d75-4757-9643-346b6aab26b9 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3f34369f-3d75-4757-9643-346b6aab26b9 ']' 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.544 12:53:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.544 [2024-12-05 12:53:57.003022] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:14.544 [2024-12-05 12:53:57.003042] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:14.544 [2024-12-05 12:53:57.003102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:14.544 [2024-12-05 12:53:57.003162] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:14.544 [2024-12-05 12:53:57.003173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:14.544 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.544 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.544 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.544 12:53:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:14.544 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:14.544 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.544 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:14.544 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:14.544 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:14.544 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:14.544 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.544 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.544 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.545 12:53:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.545 [2024-12-05 12:53:57.107071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:14.545 [2024-12-05 12:53:57.108659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:14.545 [2024-12-05 12:53:57.108701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:14.545 [2024-12-05 12:53:57.108729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:14.545 [2024-12-05 12:53:57.108769] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:14.545 [2024-12-05 12:53:57.108812] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:14.545 [2024-12-05 12:53:57.108828] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:14.545 [2024-12-05 12:53:57.108843] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:22:14.545 [2024-12-05 12:53:57.108853] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:14.545 [2024-12-05 12:53:57.108862] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:22:14.545 request: 00:22:14.545 { 00:22:14.545 "name": "raid_bdev1", 00:22:14.545 "raid_level": "concat", 00:22:14.545 "base_bdevs": [ 00:22:14.545 "malloc1", 00:22:14.545 "malloc2", 00:22:14.545 "malloc3", 00:22:14.545 "malloc4" 00:22:14.545 ], 00:22:14.545 "strip_size_kb": 64, 00:22:14.545 "superblock": false, 00:22:14.545 "method": "bdev_raid_create", 00:22:14.545 "req_id": 1 00:22:14.545 } 00:22:14.545 Got JSON-RPC error response 00:22:14.545 response: 00:22:14.545 { 00:22:14.545 "code": -17, 00:22:14.545 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:14.545 } 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:14.545 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.807 [2024-12-05 12:53:57.147046] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:14.807 [2024-12-05 12:53:57.147091] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:14.807 [2024-12-05 12:53:57.147107] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:14.807 [2024-12-05 12:53:57.147115] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:14.807 [2024-12-05 12:53:57.148945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:14.807 [2024-12-05 12:53:57.148979] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:14.807 [2024-12-05 12:53:57.149045] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:14.807 [2024-12-05 12:53:57.149089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:14.807 pt1 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.807 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:14.807 "name": "raid_bdev1", 00:22:14.807 "uuid": "3f34369f-3d75-4757-9643-346b6aab26b9", 00:22:14.807 "strip_size_kb": 64, 00:22:14.807 "state": "configuring", 00:22:14.807 "raid_level": "concat", 00:22:14.807 "superblock": true, 00:22:14.807 "num_base_bdevs": 4, 00:22:14.807 "num_base_bdevs_discovered": 1, 00:22:14.807 "num_base_bdevs_operational": 4, 00:22:14.807 "base_bdevs_list": [ 00:22:14.807 { 00:22:14.807 "name": "pt1", 00:22:14.807 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:14.807 "is_configured": true, 00:22:14.807 "data_offset": 2048, 00:22:14.807 "data_size": 63488 00:22:14.807 }, 00:22:14.807 { 00:22:14.807 "name": null, 00:22:14.808 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:14.808 "is_configured": false, 00:22:14.808 "data_offset": 2048, 00:22:14.808 "data_size": 63488 00:22:14.808 }, 00:22:14.808 { 00:22:14.808 "name": null, 00:22:14.808 
"uuid": "00000000-0000-0000-0000-000000000003", 00:22:14.808 "is_configured": false, 00:22:14.808 "data_offset": 2048, 00:22:14.808 "data_size": 63488 00:22:14.808 }, 00:22:14.808 { 00:22:14.808 "name": null, 00:22:14.808 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:14.808 "is_configured": false, 00:22:14.808 "data_offset": 2048, 00:22:14.808 "data_size": 63488 00:22:14.808 } 00:22:14.808 ] 00:22:14.808 }' 00:22:14.808 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:14.808 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.069 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:22:15.069 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:15.069 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.069 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.069 [2024-12-05 12:53:57.463134] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:15.069 [2024-12-05 12:53:57.463194] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.069 [2024-12-05 12:53:57.463209] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:15.069 [2024-12-05 12:53:57.463218] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.069 [2024-12-05 12:53:57.463575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.069 [2024-12-05 12:53:57.463589] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:15.069 [2024-12-05 12:53:57.463650] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:15.069 [2024-12-05 12:53:57.463667] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:15.069 pt2 00:22:15.069 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.069 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:22:15.069 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.069 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.069 [2024-12-05 12:53:57.471147] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:15.069 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.069 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:22:15.069 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:15.069 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:15.069 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:15.069 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:15.069 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:15.069 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:15.069 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:15.069 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:15.069 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:15.069 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.069 12:53:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.070 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.070 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.070 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.070 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:15.070 "name": "raid_bdev1", 00:22:15.070 "uuid": "3f34369f-3d75-4757-9643-346b6aab26b9", 00:22:15.070 "strip_size_kb": 64, 00:22:15.070 "state": "configuring", 00:22:15.070 "raid_level": "concat", 00:22:15.070 "superblock": true, 00:22:15.070 "num_base_bdevs": 4, 00:22:15.070 "num_base_bdevs_discovered": 1, 00:22:15.070 "num_base_bdevs_operational": 4, 00:22:15.070 "base_bdevs_list": [ 00:22:15.070 { 00:22:15.070 "name": "pt1", 00:22:15.070 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:15.070 "is_configured": true, 00:22:15.070 "data_offset": 2048, 00:22:15.070 "data_size": 63488 00:22:15.070 }, 00:22:15.070 { 00:22:15.070 "name": null, 00:22:15.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:15.070 "is_configured": false, 00:22:15.070 "data_offset": 0, 00:22:15.070 "data_size": 63488 00:22:15.070 }, 00:22:15.070 { 00:22:15.070 "name": null, 00:22:15.070 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:15.070 "is_configured": false, 00:22:15.070 "data_offset": 2048, 00:22:15.070 "data_size": 63488 00:22:15.070 }, 00:22:15.070 { 00:22:15.070 "name": null, 00:22:15.070 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:15.070 "is_configured": false, 00:22:15.070 "data_offset": 2048, 00:22:15.070 "data_size": 63488 00:22:15.070 } 00:22:15.070 ] 00:22:15.070 }' 00:22:15.070 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:15.070 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:22:15.329 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:15.329 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:15.329 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:15.329 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.329 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.329 [2024-12-05 12:53:57.783197] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:15.329 [2024-12-05 12:53:57.783251] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.329 [2024-12-05 12:53:57.783265] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:15.329 [2024-12-05 12:53:57.783272] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.329 [2024-12-05 12:53:57.783623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.329 [2024-12-05 12:53:57.783639] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:15.329 [2024-12-05 12:53:57.783700] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:15.329 [2024-12-05 12:53:57.783716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:15.329 pt2 00:22:15.329 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.329 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:15.329 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:15.329 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:22:15.329 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.329 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.329 [2024-12-05 12:53:57.791177] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:15.329 [2024-12-05 12:53:57.791220] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.329 [2024-12-05 12:53:57.791235] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:15.329 [2024-12-05 12:53:57.791242] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.329 [2024-12-05 12:53:57.791575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.329 [2024-12-05 12:53:57.791595] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:15.329 [2024-12-05 12:53:57.791649] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:15.329 [2024-12-05 12:53:57.791667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:15.329 pt3 00:22:15.329 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.329 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:15.329 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:15.329 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:15.329 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.329 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.329 [2024-12-05 12:53:57.799159] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:22:15.329 [2024-12-05 12:53:57.799195] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.329 [2024-12-05 12:53:57.799208] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:15.329 [2024-12-05 12:53:57.799215] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.329 [2024-12-05 12:53:57.799549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.329 [2024-12-05 12:53:57.799568] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:15.329 [2024-12-05 12:53:57.799621] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:15.329 [2024-12-05 12:53:57.799638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:15.329 [2024-12-05 12:53:57.799745] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:15.329 [2024-12-05 12:53:57.799775] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:15.329 [2024-12-05 12:53:57.799969] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:15.329 [2024-12-05 12:53:57.800074] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:15.329 [2024-12-05 12:53:57.800083] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:15.329 [2024-12-05 12:53:57.800178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:15.329 pt4 00:22:15.330 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.330 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:15.330 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:15.330 
12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:15.330 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:15.330 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:15.330 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:15.330 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:15.330 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:15.330 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:15.330 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:15.330 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:15.330 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:15.330 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.330 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.330 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.330 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.330 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.330 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:15.330 "name": "raid_bdev1", 00:22:15.330 "uuid": "3f34369f-3d75-4757-9643-346b6aab26b9", 00:22:15.330 "strip_size_kb": 64, 00:22:15.330 "state": "online", 00:22:15.330 "raid_level": "concat", 00:22:15.330 "superblock": true, 00:22:15.330 
"num_base_bdevs": 4, 00:22:15.330 "num_base_bdevs_discovered": 4, 00:22:15.330 "num_base_bdevs_operational": 4, 00:22:15.330 "base_bdevs_list": [ 00:22:15.330 { 00:22:15.330 "name": "pt1", 00:22:15.330 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:15.330 "is_configured": true, 00:22:15.330 "data_offset": 2048, 00:22:15.330 "data_size": 63488 00:22:15.330 }, 00:22:15.330 { 00:22:15.330 "name": "pt2", 00:22:15.330 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:15.330 "is_configured": true, 00:22:15.330 "data_offset": 2048, 00:22:15.330 "data_size": 63488 00:22:15.330 }, 00:22:15.330 { 00:22:15.330 "name": "pt3", 00:22:15.330 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:15.330 "is_configured": true, 00:22:15.330 "data_offset": 2048, 00:22:15.330 "data_size": 63488 00:22:15.330 }, 00:22:15.330 { 00:22:15.330 "name": "pt4", 00:22:15.330 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:15.330 "is_configured": true, 00:22:15.330 "data_offset": 2048, 00:22:15.330 "data_size": 63488 00:22:15.330 } 00:22:15.330 ] 00:22:15.330 }' 00:22:15.330 12:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:15.330 12:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.589 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:15.589 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:15.589 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:15.589 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:15.589 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:15.589 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:15.589 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:15.589 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:15.589 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.589 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.589 [2024-12-05 12:53:58.107529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:15.589 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.589 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:15.589 "name": "raid_bdev1", 00:22:15.589 "aliases": [ 00:22:15.589 "3f34369f-3d75-4757-9643-346b6aab26b9" 00:22:15.589 ], 00:22:15.589 "product_name": "Raid Volume", 00:22:15.589 "block_size": 512, 00:22:15.589 "num_blocks": 253952, 00:22:15.589 "uuid": "3f34369f-3d75-4757-9643-346b6aab26b9", 00:22:15.589 "assigned_rate_limits": { 00:22:15.589 "rw_ios_per_sec": 0, 00:22:15.589 "rw_mbytes_per_sec": 0, 00:22:15.589 "r_mbytes_per_sec": 0, 00:22:15.589 "w_mbytes_per_sec": 0 00:22:15.589 }, 00:22:15.589 "claimed": false, 00:22:15.589 "zoned": false, 00:22:15.589 "supported_io_types": { 00:22:15.589 "read": true, 00:22:15.589 "write": true, 00:22:15.589 "unmap": true, 00:22:15.589 "flush": true, 00:22:15.589 "reset": true, 00:22:15.589 "nvme_admin": false, 00:22:15.589 "nvme_io": false, 00:22:15.589 "nvme_io_md": false, 00:22:15.589 "write_zeroes": true, 00:22:15.589 "zcopy": false, 00:22:15.589 "get_zone_info": false, 00:22:15.589 "zone_management": false, 00:22:15.589 "zone_append": false, 00:22:15.589 "compare": false, 00:22:15.589 "compare_and_write": false, 00:22:15.589 "abort": false, 00:22:15.589 "seek_hole": false, 00:22:15.589 "seek_data": false, 00:22:15.589 "copy": false, 00:22:15.589 "nvme_iov_md": false 00:22:15.589 }, 00:22:15.589 "memory_domains": [ 00:22:15.589 { 00:22:15.589 "dma_device_id": "system", 
00:22:15.589 "dma_device_type": 1 00:22:15.589 }, 00:22:15.589 { 00:22:15.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.589 "dma_device_type": 2 00:22:15.589 }, 00:22:15.589 { 00:22:15.589 "dma_device_id": "system", 00:22:15.589 "dma_device_type": 1 00:22:15.589 }, 00:22:15.589 { 00:22:15.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.589 "dma_device_type": 2 00:22:15.589 }, 00:22:15.589 { 00:22:15.589 "dma_device_id": "system", 00:22:15.589 "dma_device_type": 1 00:22:15.589 }, 00:22:15.589 { 00:22:15.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.589 "dma_device_type": 2 00:22:15.589 }, 00:22:15.589 { 00:22:15.589 "dma_device_id": "system", 00:22:15.589 "dma_device_type": 1 00:22:15.589 }, 00:22:15.589 { 00:22:15.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.589 "dma_device_type": 2 00:22:15.589 } 00:22:15.589 ], 00:22:15.589 "driver_specific": { 00:22:15.589 "raid": { 00:22:15.590 "uuid": "3f34369f-3d75-4757-9643-346b6aab26b9", 00:22:15.590 "strip_size_kb": 64, 00:22:15.590 "state": "online", 00:22:15.590 "raid_level": "concat", 00:22:15.590 "superblock": true, 00:22:15.590 "num_base_bdevs": 4, 00:22:15.590 "num_base_bdevs_discovered": 4, 00:22:15.590 "num_base_bdevs_operational": 4, 00:22:15.590 "base_bdevs_list": [ 00:22:15.590 { 00:22:15.590 "name": "pt1", 00:22:15.590 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:15.590 "is_configured": true, 00:22:15.590 "data_offset": 2048, 00:22:15.590 "data_size": 63488 00:22:15.590 }, 00:22:15.590 { 00:22:15.590 "name": "pt2", 00:22:15.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:15.590 "is_configured": true, 00:22:15.590 "data_offset": 2048, 00:22:15.590 "data_size": 63488 00:22:15.590 }, 00:22:15.590 { 00:22:15.590 "name": "pt3", 00:22:15.590 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:15.590 "is_configured": true, 00:22:15.590 "data_offset": 2048, 00:22:15.590 "data_size": 63488 00:22:15.590 }, 00:22:15.590 { 00:22:15.590 "name": "pt4", 00:22:15.590 
"uuid": "00000000-0000-0000-0000-000000000004", 00:22:15.590 "is_configured": true, 00:22:15.590 "data_offset": 2048, 00:22:15.590 "data_size": 63488 00:22:15.590 } 00:22:15.590 ] 00:22:15.590 } 00:22:15.590 } 00:22:15.590 }' 00:22:15.590 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:15.590 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:15.590 pt2 00:22:15.590 pt3 00:22:15.590 pt4' 00:22:15.590 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.849 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:15.849 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:15.849 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:15.849 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.849 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.849 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.849 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.849 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:15.849 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:15.849 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:15.850 [2024-12-05 12:53:58.339556] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3f34369f-3d75-4757-9643-346b6aab26b9 '!=' 3f34369f-3d75-4757-9643-346b6aab26b9 ']' 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70570 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70570 ']' 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70570 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:22:15.850 12:53:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70570 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:15.850 killing process with pid 70570 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70570' 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70570 00:22:15.850 [2024-12-05 12:53:58.386398] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:15.850 12:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70570 00:22:15.850 [2024-12-05 12:53:58.386468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:15.850 [2024-12-05 12:53:58.386540] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:15.850 [2024-12-05 12:53:58.386552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:16.111 [2024-12-05 12:53:58.585847] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:16.683 12:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:22:16.683 00:22:16.683 real 0m3.803s 00:22:16.683 user 0m5.522s 00:22:16.683 sys 0m0.604s 00:22:16.683 12:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:16.683 12:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.683 ************************************ 00:22:16.683 END TEST raid_superblock_test 00:22:16.683 ************************************ 00:22:16.683 
12:53:59 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:22:16.683 12:53:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:16.683 12:53:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:16.683 12:53:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:16.683 ************************************ 00:22:16.683 START TEST raid_read_error_test 00:22:16.683 ************************************ 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sKZaXg5X6R 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70818 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70818 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70818 ']' 00:22:16.683 12:53:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:16.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:16.683 12:53:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.944 [2024-12-05 12:53:59.306404] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:22:16.944 [2024-12-05 12:53:59.306591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70818 ] 00:22:16.945 [2024-12-05 12:53:59.477435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.204 [2024-12-05 12:53:59.564678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.204 [2024-12-05 12:53:59.677724] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:17.204 [2024-12-05 12:53:59.677775] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.783 BaseBdev1_malloc 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.783 true 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.783 [2024-12-05 12:54:00.207136] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:17.783 [2024-12-05 12:54:00.207184] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.783 [2024-12-05 12:54:00.207201] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:17.783 [2024-12-05 12:54:00.207210] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.783 [2024-12-05 12:54:00.209001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.783 [2024-12-05 12:54:00.209036] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:17.783 BaseBdev1 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.783 BaseBdev2_malloc 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.783 true 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.783 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.783 [2024-12-05 12:54:00.247150] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:17.783 [2024-12-05 12:54:00.247194] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.783 [2024-12-05 12:54:00.247209] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:17.783 [2024-12-05 12:54:00.247217] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.783 [2024-12-05 12:54:00.249012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.783 [2024-12-05 12:54:00.249047] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:17.783 BaseBdev2 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.784 BaseBdev3_malloc 00:22:17.784 12:54:00 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.784 true 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.784 [2024-12-05 12:54:00.301920] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:17.784 [2024-12-05 12:54:00.301964] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.784 [2024-12-05 12:54:00.301978] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:17.784 [2024-12-05 12:54:00.301987] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.784 [2024-12-05 12:54:00.303720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.784 [2024-12-05 12:54:00.303751] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:17.784 BaseBdev3 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.784 BaseBdev4_malloc 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.784 true 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.784 [2024-12-05 12:54:00.341658] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:17.784 [2024-12-05 12:54:00.341697] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.784 [2024-12-05 12:54:00.341710] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:17.784 [2024-12-05 12:54:00.341719] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.784 [2024-12-05 12:54:00.343430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.784 [2024-12-05 12:54:00.343461] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:17.784 BaseBdev4 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.784 [2024-12-05 12:54:00.349717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:17.784 [2024-12-05 12:54:00.351265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:17.784 [2024-12-05 12:54:00.351332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:17.784 [2024-12-05 12:54:00.351387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:17.784 [2024-12-05 12:54:00.351577] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:22:17.784 [2024-12-05 12:54:00.351589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:17.784 [2024-12-05 12:54:00.351800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:22:17.784 [2024-12-05 12:54:00.351930] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:22:17.784 [2024-12-05 12:54:00.351975] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:22:17.784 [2024-12-05 12:54:00.352093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:17.784 12:54:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.784 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.045 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.045 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:18.045 "name": "raid_bdev1", 00:22:18.045 "uuid": "b1278fcf-4846-4f29-abe4-8485588b74c7", 00:22:18.045 "strip_size_kb": 64, 00:22:18.045 "state": "online", 00:22:18.045 "raid_level": "concat", 00:22:18.045 "superblock": true, 00:22:18.045 "num_base_bdevs": 4, 00:22:18.045 "num_base_bdevs_discovered": 4, 00:22:18.045 "num_base_bdevs_operational": 4, 00:22:18.045 "base_bdevs_list": [ 
00:22:18.045 { 00:22:18.045 "name": "BaseBdev1", 00:22:18.045 "uuid": "00472e7c-1e64-5c2a-b50e-604d87bbe932", 00:22:18.045 "is_configured": true, 00:22:18.045 "data_offset": 2048, 00:22:18.045 "data_size": 63488 00:22:18.045 }, 00:22:18.045 { 00:22:18.045 "name": "BaseBdev2", 00:22:18.045 "uuid": "c85dea17-e39b-51e8-9ac8-947fd8399f72", 00:22:18.045 "is_configured": true, 00:22:18.045 "data_offset": 2048, 00:22:18.045 "data_size": 63488 00:22:18.045 }, 00:22:18.045 { 00:22:18.045 "name": "BaseBdev3", 00:22:18.045 "uuid": "5adc79d4-6369-5f37-91e7-030a16ffc7a2", 00:22:18.045 "is_configured": true, 00:22:18.045 "data_offset": 2048, 00:22:18.045 "data_size": 63488 00:22:18.045 }, 00:22:18.045 { 00:22:18.045 "name": "BaseBdev4", 00:22:18.045 "uuid": "c89e1de6-b8fe-592a-9f9c-80d75ad0f83d", 00:22:18.045 "is_configured": true, 00:22:18.045 "data_offset": 2048, 00:22:18.045 "data_size": 63488 00:22:18.045 } 00:22:18.045 ] 00:22:18.045 }' 00:22:18.046 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:18.046 12:54:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.307 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:22:18.307 12:54:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:22:18.307 [2024-12-05 12:54:00.750611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:22:19.256 12:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:22:19.256 12:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.256 12:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.256 12:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.256 12:54:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:22:19.256 12:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:22:19.256 12:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:22:19.256 12:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:19.256 12:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:19.256 12:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:19.256 12:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:19.256 12:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:19.256 12:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:19.256 12:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.256 12:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.256 12:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:19.256 12:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.256 12:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.256 12:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.256 12:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.257 12:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.257 12:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.257 12:54:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.257 "name": "raid_bdev1", 00:22:19.257 "uuid": "b1278fcf-4846-4f29-abe4-8485588b74c7", 00:22:19.257 "strip_size_kb": 64, 00:22:19.257 "state": "online", 00:22:19.257 "raid_level": "concat", 00:22:19.257 "superblock": true, 00:22:19.257 "num_base_bdevs": 4, 00:22:19.257 "num_base_bdevs_discovered": 4, 00:22:19.257 "num_base_bdevs_operational": 4, 00:22:19.257 "base_bdevs_list": [ 00:22:19.257 { 00:22:19.257 "name": "BaseBdev1", 00:22:19.257 "uuid": "00472e7c-1e64-5c2a-b50e-604d87bbe932", 00:22:19.257 "is_configured": true, 00:22:19.257 "data_offset": 2048, 00:22:19.257 "data_size": 63488 00:22:19.257 }, 00:22:19.257 { 00:22:19.257 "name": "BaseBdev2", 00:22:19.257 "uuid": "c85dea17-e39b-51e8-9ac8-947fd8399f72", 00:22:19.257 "is_configured": true, 00:22:19.257 "data_offset": 2048, 00:22:19.257 "data_size": 63488 00:22:19.257 }, 00:22:19.257 { 00:22:19.257 "name": "BaseBdev3", 00:22:19.257 "uuid": "5adc79d4-6369-5f37-91e7-030a16ffc7a2", 00:22:19.257 "is_configured": true, 00:22:19.257 "data_offset": 2048, 00:22:19.257 "data_size": 63488 00:22:19.257 }, 00:22:19.257 { 00:22:19.257 "name": "BaseBdev4", 00:22:19.257 "uuid": "c89e1de6-b8fe-592a-9f9c-80d75ad0f83d", 00:22:19.257 "is_configured": true, 00:22:19.257 "data_offset": 2048, 00:22:19.257 "data_size": 63488 00:22:19.257 } 00:22:19.257 ] 00:22:19.257 }' 00:22:19.257 12:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.257 12:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.518 12:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:19.518 12:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.518 12:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.518 [2024-12-05 12:54:01.975497] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:19.518 [2024-12-05 12:54:01.975526] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:19.518 [2024-12-05 12:54:01.977982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:19.518 [2024-12-05 12:54:01.978040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:19.518 [2024-12-05 12:54:01.978078] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:19.518 [2024-12-05 12:54:01.978087] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:22:19.518 { 00:22:19.518 "results": [ 00:22:19.518 { 00:22:19.518 "job": "raid_bdev1", 00:22:19.518 "core_mask": "0x1", 00:22:19.518 "workload": "randrw", 00:22:19.518 "percentage": 50, 00:22:19.518 "status": "finished", 00:22:19.518 "queue_depth": 1, 00:22:19.518 "io_size": 131072, 00:22:19.518 "runtime": 1.223329, 00:22:19.518 "iops": 16921.04086472241, 00:22:19.518 "mibps": 2115.1301080903013, 00:22:19.518 "io_failed": 1, 00:22:19.518 "io_timeout": 0, 00:22:19.518 "avg_latency_us": 80.84897095272247, 00:22:19.518 "min_latency_us": 27.766153846153845, 00:22:19.518 "max_latency_us": 1380.0369230769231 00:22:19.518 } 00:22:19.518 ], 00:22:19.518 "core_count": 1 00:22:19.518 } 00:22:19.518 12:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.518 12:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70818 00:22:19.518 12:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70818 ']' 00:22:19.518 12:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70818 00:22:19.518 12:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:22:19.518 12:54:01 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:19.518 12:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70818 00:22:19.518 12:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:19.518 12:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:19.518 killing process with pid 70818 00:22:19.518 12:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70818' 00:22:19.518 12:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70818 00:22:19.518 [2024-12-05 12:54:01.998960] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:19.518 12:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70818 00:22:19.778 [2024-12-05 12:54:02.159156] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:20.348 12:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sKZaXg5X6R 00:22:20.348 12:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:22:20.348 12:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:22:20.348 12:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.82 00:22:20.348 12:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:22:20.348 12:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:20.348 12:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:20.348 12:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.82 != \0\.\0\0 ]] 00:22:20.348 00:22:20.348 real 0m3.558s 00:22:20.348 user 0m4.225s 00:22:20.348 sys 0m0.439s 00:22:20.348 12:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:22:20.348 12:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.348 ************************************ 00:22:20.348 END TEST raid_read_error_test 00:22:20.348 ************************************ 00:22:20.348 12:54:02 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:22:20.348 12:54:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:20.348 12:54:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:20.348 12:54:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:20.348 ************************************ 00:22:20.348 START TEST raid_write_error_test 00:22:20.348 ************************************ 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ozwGDZfaW2 00:22:20.348 12:54:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70947 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70947 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 70947 ']' 00:22:20.348 12:54:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.349 12:54:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:20.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.349 12:54:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.349 12:54:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:20.349 12:54:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.349 [2024-12-05 12:54:02.875842] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:22:20.349 [2024-12-05 12:54:02.875971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70947 ] 00:22:20.608 [2024-12-05 12:54:03.035163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.608 [2024-12-05 12:54:03.140341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.869 [2024-12-05 12:54:03.280767] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:20.869 [2024-12-05 12:54:03.280807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.440 BaseBdev1_malloc 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.440 true 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.440 [2024-12-05 12:54:03.866192] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:21.440 [2024-12-05 12:54:03.866246] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.440 [2024-12-05 12:54:03.866266] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:21.440 [2024-12-05 12:54:03.866277] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.440 [2024-12-05 12:54:03.868427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.440 [2024-12-05 12:54:03.868464] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:21.440 BaseBdev1 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.440 BaseBdev2_malloc 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:22:21.440 12:54:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.440 true 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.440 [2024-12-05 12:54:03.910778] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:21.440 [2024-12-05 12:54:03.910834] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.440 [2024-12-05 12:54:03.910852] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:21.440 [2024-12-05 12:54:03.910862] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.440 [2024-12-05 12:54:03.913751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.440 [2024-12-05 12:54:03.913802] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:21.440 BaseBdev2 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.440 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:22:21.441 BaseBdev3_malloc 00:22:21.441 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.441 12:54:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:22:21.441 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.441 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.441 true 00:22:21.441 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.441 12:54:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:21.441 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.441 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.441 [2024-12-05 12:54:03.973194] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:21.441 [2024-12-05 12:54:03.973247] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.441 [2024-12-05 12:54:03.973265] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:21.441 [2024-12-05 12:54:03.973275] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.441 [2024-12-05 12:54:03.975436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.441 [2024-12-05 12:54:03.975476] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:21.441 BaseBdev3 00:22:21.441 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.441 12:54:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:21.441 12:54:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:21.441 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.441 12:54:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.441 BaseBdev4_malloc 00:22:21.441 12:54:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.441 12:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:22:21.441 12:54:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.441 12:54:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.441 true 00:22:21.441 12:54:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.441 12:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:21.441 12:54:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.441 12:54:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.441 [2024-12-05 12:54:04.017412] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:21.441 [2024-12-05 12:54:04.017465] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.441 [2024-12-05 12:54:04.017484] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:21.441 [2024-12-05 12:54:04.017504] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.441 [2024-12-05 12:54:04.019678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.441 [2024-12-05 12:54:04.019715] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:21.441 BaseBdev4 
00:22:21.441 12:54:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.441 12:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:22:21.441 12:54:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.441 12:54:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.703 [2024-12-05 12:54:04.025481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:21.703 [2024-12-05 12:54:04.027322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:21.703 [2024-12-05 12:54:04.027403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:21.703 [2024-12-05 12:54:04.027468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:21.703 [2024-12-05 12:54:04.027708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:22:21.703 [2024-12-05 12:54:04.027729] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:21.703 [2024-12-05 12:54:04.028028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:22:21.703 [2024-12-05 12:54:04.028188] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:22:21.703 [2024-12-05 12:54:04.028206] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:22:21.703 [2024-12-05 12:54:04.028352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:21.703 12:54:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.703 12:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:22:21.703 12:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:21.703 12:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:21.703 12:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:21.703 12:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:21.703 12:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:21.703 12:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:21.703 12:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:21.703 12:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:21.703 12:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:21.703 12:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.703 12:54:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.703 12:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.703 12:54:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.703 12:54:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.703 12:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:21.703 "name": "raid_bdev1", 00:22:21.703 "uuid": "136ab24f-dbfc-419e-837a-a209615de250", 00:22:21.703 "strip_size_kb": 64, 00:22:21.703 "state": "online", 00:22:21.703 "raid_level": "concat", 00:22:21.703 "superblock": true, 00:22:21.703 "num_base_bdevs": 4, 00:22:21.703 "num_base_bdevs_discovered": 4, 00:22:21.703 
"num_base_bdevs_operational": 4, 00:22:21.703 "base_bdevs_list": [ 00:22:21.703 { 00:22:21.703 "name": "BaseBdev1", 00:22:21.703 "uuid": "53d7f4d7-e39b-5be8-b9aa-46add2bbeede", 00:22:21.703 "is_configured": true, 00:22:21.703 "data_offset": 2048, 00:22:21.703 "data_size": 63488 00:22:21.703 }, 00:22:21.703 { 00:22:21.703 "name": "BaseBdev2", 00:22:21.703 "uuid": "88c39648-ef3f-5477-9981-e700aaf62e34", 00:22:21.703 "is_configured": true, 00:22:21.703 "data_offset": 2048, 00:22:21.703 "data_size": 63488 00:22:21.703 }, 00:22:21.703 { 00:22:21.703 "name": "BaseBdev3", 00:22:21.703 "uuid": "a0e7a553-5f6c-56d4-bfad-b0bb2e59538d", 00:22:21.703 "is_configured": true, 00:22:21.703 "data_offset": 2048, 00:22:21.703 "data_size": 63488 00:22:21.703 }, 00:22:21.703 { 00:22:21.703 "name": "BaseBdev4", 00:22:21.703 "uuid": "51071aec-2721-5d3e-a7fa-8f1fb167d8db", 00:22:21.703 "is_configured": true, 00:22:21.703 "data_offset": 2048, 00:22:21.703 "data_size": 63488 00:22:21.703 } 00:22:21.703 ] 00:22:21.703 }' 00:22:21.703 12:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:21.703 12:54:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.964 12:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:22:21.964 12:54:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:22:21.964 [2024-12-05 12:54:04.398498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:22:22.902 12:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:22.902 12:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.902 12:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.902 12:54:05 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.902 12:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:22:22.902 12:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:22:22.902 12:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:22:22.902 12:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:22:22.903 12:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:22.903 12:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:22.903 12:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:22.903 12:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:22.903 12:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:22.903 12:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:22.903 12:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:22.903 12:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:22.903 12:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:22.903 12:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.903 12:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.903 12:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.903 12:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.903 12:54:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.903 12:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:22.903 "name": "raid_bdev1", 00:22:22.903 "uuid": "136ab24f-dbfc-419e-837a-a209615de250", 00:22:22.903 "strip_size_kb": 64, 00:22:22.903 "state": "online", 00:22:22.903 "raid_level": "concat", 00:22:22.903 "superblock": true, 00:22:22.903 "num_base_bdevs": 4, 00:22:22.903 "num_base_bdevs_discovered": 4, 00:22:22.903 "num_base_bdevs_operational": 4, 00:22:22.903 "base_bdevs_list": [ 00:22:22.903 { 00:22:22.903 "name": "BaseBdev1", 00:22:22.903 "uuid": "53d7f4d7-e39b-5be8-b9aa-46add2bbeede", 00:22:22.903 "is_configured": true, 00:22:22.903 "data_offset": 2048, 00:22:22.903 "data_size": 63488 00:22:22.903 }, 00:22:22.903 { 00:22:22.903 "name": "BaseBdev2", 00:22:22.903 "uuid": "88c39648-ef3f-5477-9981-e700aaf62e34", 00:22:22.903 "is_configured": true, 00:22:22.903 "data_offset": 2048, 00:22:22.903 "data_size": 63488 00:22:22.903 }, 00:22:22.903 { 00:22:22.903 "name": "BaseBdev3", 00:22:22.903 "uuid": "a0e7a553-5f6c-56d4-bfad-b0bb2e59538d", 00:22:22.903 "is_configured": true, 00:22:22.903 "data_offset": 2048, 00:22:22.903 "data_size": 63488 00:22:22.903 }, 00:22:22.903 { 00:22:22.903 "name": "BaseBdev4", 00:22:22.903 "uuid": "51071aec-2721-5d3e-a7fa-8f1fb167d8db", 00:22:22.903 "is_configured": true, 00:22:22.903 "data_offset": 2048, 00:22:22.903 "data_size": 63488 00:22:22.903 } 00:22:22.903 ] 00:22:22.903 }' 00:22:22.903 12:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:22.903 12:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.216 12:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:23.216 12:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.216 12:54:05 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:23.216 [2024-12-05 12:54:05.664147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:23.216 [2024-12-05 12:54:05.664182] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:23.216 [2024-12-05 12:54:05.667234] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:23.216 [2024-12-05 12:54:05.667299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:23.216 [2024-12-05 12:54:05.667342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:23.216 [2024-12-05 12:54:05.667354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:22:23.216 12:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.216 { 00:22:23.216 "results": [ 00:22:23.216 { 00:22:23.216 "job": "raid_bdev1", 00:22:23.216 "core_mask": "0x1", 00:22:23.216 "workload": "randrw", 00:22:23.216 "percentage": 50, 00:22:23.216 "status": "finished", 00:22:23.216 "queue_depth": 1, 00:22:23.216 "io_size": 131072, 00:22:23.216 "runtime": 1.263835, 00:22:23.216 "iops": 14182.231066555365, 00:22:23.216 "mibps": 1772.7788833194206, 00:22:23.216 "io_failed": 1, 00:22:23.216 "io_timeout": 0, 00:22:23.216 "avg_latency_us": 96.09389687801738, 00:22:23.216 "min_latency_us": 33.870769230769234, 00:22:23.216 "max_latency_us": 1688.8123076923077 00:22:23.216 } 00:22:23.216 ], 00:22:23.216 "core_count": 1 00:22:23.216 } 00:22:23.216 12:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70947 00:22:23.216 12:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 70947 ']' 00:22:23.216 12:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 70947 00:22:23.216 12:54:05 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:22:23.216 12:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.216 12:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70947 00:22:23.216 12:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:23.216 12:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:23.216 12:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70947' 00:22:23.216 killing process with pid 70947 00:22:23.216 12:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 70947 00:22:23.216 [2024-12-05 12:54:05.693463] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:23.216 12:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 70947 00:22:23.475 [2024-12-05 12:54:05.893480] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:24.409 12:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ozwGDZfaW2 00:22:24.409 12:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:22:24.409 12:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:22:24.409 12:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79 00:22:24.409 12:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:22:24.409 12:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:24.409 12:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:24.409 12:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]] 00:22:24.409 00:22:24.409 real 0m3.850s 00:22:24.409 user 0m4.591s 
00:22:24.409 sys 0m0.399s 00:22:24.409 ************************************ 00:22:24.409 END TEST raid_write_error_test 00:22:24.409 ************************************ 00:22:24.409 12:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.409 12:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.409 12:54:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:22:24.409 12:54:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:22:24.409 12:54:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:24.409 12:54:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:24.409 12:54:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:24.409 ************************************ 00:22:24.409 START TEST raid_state_function_test 00:22:24.409 ************************************ 00:22:24.409 12:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:22:24.409 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:24.409 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:22:24.409 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:22:24.409 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:24.409 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:24.409 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:24.409 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:24.409 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:24.409 
12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:24.410 12:54:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71085 00:22:24.410 Process raid pid: 71085 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71085' 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71085 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71085 ']' 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.410 12:54:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.410 [2024-12-05 12:54:06.783349] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:22:24.410 [2024-12-05 12:54:06.783485] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.410 [2024-12-05 12:54:06.936627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.669 [2024-12-05 12:54:07.038240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.669 [2024-12-05 12:54:07.176408] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:24.669 [2024-12-05 12:54:07.176452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.240 [2024-12-05 12:54:07.636462] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:25.240 [2024-12-05 12:54:07.636525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:25.240 [2024-12-05 12:54:07.636536] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:25.240 [2024-12-05 12:54:07.636546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:25.240 [2024-12-05 12:54:07.636552] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:22:25.240 [2024-12-05 12:54:07.636560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:25.240 [2024-12-05 12:54:07.636567] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:25.240 [2024-12-05 12:54:07.636575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:25.240 "name": "Existed_Raid", 00:22:25.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.240 "strip_size_kb": 0, 00:22:25.240 "state": "configuring", 00:22:25.240 "raid_level": "raid1", 00:22:25.240 "superblock": false, 00:22:25.240 "num_base_bdevs": 4, 00:22:25.240 "num_base_bdevs_discovered": 0, 00:22:25.240 "num_base_bdevs_operational": 4, 00:22:25.240 "base_bdevs_list": [ 00:22:25.240 { 00:22:25.240 "name": "BaseBdev1", 00:22:25.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.240 "is_configured": false, 00:22:25.240 "data_offset": 0, 00:22:25.240 "data_size": 0 00:22:25.240 }, 00:22:25.240 { 00:22:25.240 "name": "BaseBdev2", 00:22:25.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.240 "is_configured": false, 00:22:25.240 "data_offset": 0, 00:22:25.240 "data_size": 0 00:22:25.240 }, 00:22:25.240 { 00:22:25.240 "name": "BaseBdev3", 00:22:25.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.240 "is_configured": false, 00:22:25.240 "data_offset": 0, 00:22:25.240 "data_size": 0 00:22:25.240 }, 00:22:25.240 { 00:22:25.240 "name": "BaseBdev4", 00:22:25.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.240 "is_configured": false, 00:22:25.240 "data_offset": 0, 00:22:25.240 "data_size": 0 00:22:25.240 } 00:22:25.240 ] 00:22:25.240 }' 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:25.240 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.501 [2024-12-05 12:54:07.948482] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:25.501 [2024-12-05 12:54:07.948531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.501 [2024-12-05 12:54:07.956507] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:25.501 [2024-12-05 12:54:07.956545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:25.501 [2024-12-05 12:54:07.956553] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:25.501 [2024-12-05 12:54:07.956563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:25.501 [2024-12-05 12:54:07.956569] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:25.501 [2024-12-05 12:54:07.956578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:25.501 [2024-12-05 12:54:07.956584] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:25.501 [2024-12-05 12:54:07.956592] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.501 [2024-12-05 12:54:07.989072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:25.501 BaseBdev1 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.501 12:54:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.501 [ 00:22:25.501 { 00:22:25.501 "name": "BaseBdev1", 00:22:25.501 "aliases": [ 00:22:25.501 "fbfbb6ed-5a66-4495-ad0c-a2e8ee05b161" 00:22:25.501 ], 00:22:25.501 "product_name": "Malloc disk", 00:22:25.501 "block_size": 512, 00:22:25.501 "num_blocks": 65536, 00:22:25.501 "uuid": "fbfbb6ed-5a66-4495-ad0c-a2e8ee05b161", 00:22:25.501 "assigned_rate_limits": { 00:22:25.501 "rw_ios_per_sec": 0, 00:22:25.501 "rw_mbytes_per_sec": 0, 00:22:25.501 "r_mbytes_per_sec": 0, 00:22:25.501 "w_mbytes_per_sec": 0 00:22:25.501 }, 00:22:25.501 "claimed": true, 00:22:25.501 "claim_type": "exclusive_write", 00:22:25.501 "zoned": false, 00:22:25.501 "supported_io_types": { 00:22:25.501 "read": true, 00:22:25.501 "write": true, 00:22:25.501 "unmap": true, 00:22:25.501 "flush": true, 00:22:25.501 "reset": true, 00:22:25.501 "nvme_admin": false, 00:22:25.501 "nvme_io": false, 00:22:25.501 "nvme_io_md": false, 00:22:25.501 "write_zeroes": true, 00:22:25.501 "zcopy": true, 00:22:25.501 "get_zone_info": false, 00:22:25.501 "zone_management": false, 00:22:25.501 "zone_append": false, 00:22:25.501 "compare": false, 00:22:25.501 "compare_and_write": false, 00:22:25.501 "abort": true, 00:22:25.501 "seek_hole": false, 00:22:25.501 "seek_data": false, 00:22:25.501 "copy": true, 00:22:25.501 "nvme_iov_md": false 00:22:25.501 }, 00:22:25.501 "memory_domains": [ 00:22:25.501 { 00:22:25.501 "dma_device_id": "system", 00:22:25.501 "dma_device_type": 1 00:22:25.501 }, 00:22:25.501 { 00:22:25.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.501 "dma_device_type": 2 00:22:25.501 } 00:22:25.501 ], 00:22:25.501 "driver_specific": {} 00:22:25.501 } 00:22:25.501 ] 00:22:25.501 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:22:25.501 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:25.501 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:25.501 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:25.501 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:25.501 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:25.501 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:25.501 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:25.501 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:25.501 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:25.501 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:25.501 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:25.501 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.501 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:25.501 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.501 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.501 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.501 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:25.501 "name": "Existed_Raid", 
00:22:25.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.501 "strip_size_kb": 0, 00:22:25.501 "state": "configuring", 00:22:25.501 "raid_level": "raid1", 00:22:25.501 "superblock": false, 00:22:25.502 "num_base_bdevs": 4, 00:22:25.502 "num_base_bdevs_discovered": 1, 00:22:25.502 "num_base_bdevs_operational": 4, 00:22:25.502 "base_bdevs_list": [ 00:22:25.502 { 00:22:25.502 "name": "BaseBdev1", 00:22:25.502 "uuid": "fbfbb6ed-5a66-4495-ad0c-a2e8ee05b161", 00:22:25.502 "is_configured": true, 00:22:25.502 "data_offset": 0, 00:22:25.502 "data_size": 65536 00:22:25.502 }, 00:22:25.502 { 00:22:25.502 "name": "BaseBdev2", 00:22:25.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.502 "is_configured": false, 00:22:25.502 "data_offset": 0, 00:22:25.502 "data_size": 0 00:22:25.502 }, 00:22:25.502 { 00:22:25.502 "name": "BaseBdev3", 00:22:25.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.502 "is_configured": false, 00:22:25.502 "data_offset": 0, 00:22:25.502 "data_size": 0 00:22:25.502 }, 00:22:25.502 { 00:22:25.502 "name": "BaseBdev4", 00:22:25.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.502 "is_configured": false, 00:22:25.502 "data_offset": 0, 00:22:25.502 "data_size": 0 00:22:25.502 } 00:22:25.502 ] 00:22:25.502 }' 00:22:25.502 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:25.502 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.762 [2024-12-05 12:54:08.297176] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:25.762 [2024-12-05 12:54:08.297223] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.762 [2024-12-05 12:54:08.309234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:25.762 [2024-12-05 12:54:08.311128] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:25.762 [2024-12-05 12:54:08.311170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:25.762 [2024-12-05 12:54:08.311180] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:25.762 [2024-12-05 12:54:08.311193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:25.762 [2024-12-05 12:54:08.311200] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:25.762 [2024-12-05 12:54:08.311209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:25.762 
12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.762 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.021 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:26.021 "name": "Existed_Raid", 00:22:26.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.021 "strip_size_kb": 0, 00:22:26.021 "state": "configuring", 00:22:26.021 "raid_level": "raid1", 00:22:26.021 "superblock": false, 00:22:26.021 "num_base_bdevs": 4, 00:22:26.021 "num_base_bdevs_discovered": 1, 
00:22:26.021 "num_base_bdevs_operational": 4, 00:22:26.021 "base_bdevs_list": [ 00:22:26.021 { 00:22:26.021 "name": "BaseBdev1", 00:22:26.021 "uuid": "fbfbb6ed-5a66-4495-ad0c-a2e8ee05b161", 00:22:26.021 "is_configured": true, 00:22:26.021 "data_offset": 0, 00:22:26.021 "data_size": 65536 00:22:26.021 }, 00:22:26.021 { 00:22:26.021 "name": "BaseBdev2", 00:22:26.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.021 "is_configured": false, 00:22:26.021 "data_offset": 0, 00:22:26.021 "data_size": 0 00:22:26.021 }, 00:22:26.021 { 00:22:26.021 "name": "BaseBdev3", 00:22:26.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.021 "is_configured": false, 00:22:26.021 "data_offset": 0, 00:22:26.021 "data_size": 0 00:22:26.021 }, 00:22:26.021 { 00:22:26.021 "name": "BaseBdev4", 00:22:26.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.021 "is_configured": false, 00:22:26.021 "data_offset": 0, 00:22:26.021 "data_size": 0 00:22:26.021 } 00:22:26.021 ] 00:22:26.021 }' 00:22:26.021 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:26.021 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.279 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:26.279 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.279 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.279 [2024-12-05 12:54:08.643810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:26.279 BaseBdev2 00:22:26.279 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.279 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:26.279 12:54:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:26.279 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:26.279 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:26.279 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:26.279 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:26.279 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:26.279 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.279 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.279 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.279 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:26.279 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.279 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.279 [ 00:22:26.279 { 00:22:26.279 "name": "BaseBdev2", 00:22:26.279 "aliases": [ 00:22:26.279 "1e8462f3-76c3-47fe-9fdc-78e578e5adb5" 00:22:26.279 ], 00:22:26.279 "product_name": "Malloc disk", 00:22:26.279 "block_size": 512, 00:22:26.279 "num_blocks": 65536, 00:22:26.279 "uuid": "1e8462f3-76c3-47fe-9fdc-78e578e5adb5", 00:22:26.279 "assigned_rate_limits": { 00:22:26.279 "rw_ios_per_sec": 0, 00:22:26.279 "rw_mbytes_per_sec": 0, 00:22:26.279 "r_mbytes_per_sec": 0, 00:22:26.279 "w_mbytes_per_sec": 0 00:22:26.279 }, 00:22:26.279 "claimed": true, 00:22:26.279 "claim_type": "exclusive_write", 00:22:26.279 "zoned": false, 00:22:26.279 "supported_io_types": { 00:22:26.279 "read": true, 
00:22:26.279 "write": true, 00:22:26.279 "unmap": true, 00:22:26.279 "flush": true, 00:22:26.279 "reset": true, 00:22:26.279 "nvme_admin": false, 00:22:26.279 "nvme_io": false, 00:22:26.279 "nvme_io_md": false, 00:22:26.279 "write_zeroes": true, 00:22:26.279 "zcopy": true, 00:22:26.279 "get_zone_info": false, 00:22:26.279 "zone_management": false, 00:22:26.279 "zone_append": false, 00:22:26.279 "compare": false, 00:22:26.279 "compare_and_write": false, 00:22:26.279 "abort": true, 00:22:26.279 "seek_hole": false, 00:22:26.279 "seek_data": false, 00:22:26.279 "copy": true, 00:22:26.279 "nvme_iov_md": false 00:22:26.279 }, 00:22:26.279 "memory_domains": [ 00:22:26.279 { 00:22:26.279 "dma_device_id": "system", 00:22:26.279 "dma_device_type": 1 00:22:26.279 }, 00:22:26.279 { 00:22:26.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.279 "dma_device_type": 2 00:22:26.279 } 00:22:26.279 ], 00:22:26.279 "driver_specific": {} 00:22:26.279 } 00:22:26.279 ] 00:22:26.279 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.279 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:26.280 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:26.280 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:26.280 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:26.280 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:26.280 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:26.280 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:26.280 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:22:26.280 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:26.280 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:26.280 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:26.280 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:26.280 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:26.280 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.280 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:26.280 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.280 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.280 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.280 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:26.280 "name": "Existed_Raid", 00:22:26.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.280 "strip_size_kb": 0, 00:22:26.280 "state": "configuring", 00:22:26.280 "raid_level": "raid1", 00:22:26.280 "superblock": false, 00:22:26.280 "num_base_bdevs": 4, 00:22:26.280 "num_base_bdevs_discovered": 2, 00:22:26.280 "num_base_bdevs_operational": 4, 00:22:26.280 "base_bdevs_list": [ 00:22:26.280 { 00:22:26.280 "name": "BaseBdev1", 00:22:26.280 "uuid": "fbfbb6ed-5a66-4495-ad0c-a2e8ee05b161", 00:22:26.280 "is_configured": true, 00:22:26.280 "data_offset": 0, 00:22:26.280 "data_size": 65536 00:22:26.280 }, 00:22:26.280 { 00:22:26.280 "name": "BaseBdev2", 00:22:26.280 "uuid": "1e8462f3-76c3-47fe-9fdc-78e578e5adb5", 00:22:26.280 "is_configured": true, 
00:22:26.280 "data_offset": 0, 00:22:26.280 "data_size": 65536 00:22:26.280 }, 00:22:26.280 { 00:22:26.280 "name": "BaseBdev3", 00:22:26.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.280 "is_configured": false, 00:22:26.280 "data_offset": 0, 00:22:26.280 "data_size": 0 00:22:26.280 }, 00:22:26.280 { 00:22:26.280 "name": "BaseBdev4", 00:22:26.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.280 "is_configured": false, 00:22:26.280 "data_offset": 0, 00:22:26.280 "data_size": 0 00:22:26.280 } 00:22:26.280 ] 00:22:26.280 }' 00:22:26.280 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:26.280 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.539 12:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:26.539 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.539 12:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.539 [2024-12-05 12:54:09.021816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:26.539 BaseBdev3 00:22:26.539 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.539 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:26.539 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:26.539 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:26.539 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:26.539 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:26.539 12:54:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:26.539 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:26.539 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.539 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.539 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.539 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:26.539 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.539 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.539 [ 00:22:26.539 { 00:22:26.539 "name": "BaseBdev3", 00:22:26.539 "aliases": [ 00:22:26.539 "a1d2eedd-eee5-43dc-8640-1136c833248e" 00:22:26.539 ], 00:22:26.539 "product_name": "Malloc disk", 00:22:26.539 "block_size": 512, 00:22:26.539 "num_blocks": 65536, 00:22:26.539 "uuid": "a1d2eedd-eee5-43dc-8640-1136c833248e", 00:22:26.539 "assigned_rate_limits": { 00:22:26.539 "rw_ios_per_sec": 0, 00:22:26.539 "rw_mbytes_per_sec": 0, 00:22:26.539 "r_mbytes_per_sec": 0, 00:22:26.539 "w_mbytes_per_sec": 0 00:22:26.539 }, 00:22:26.539 "claimed": true, 00:22:26.539 "claim_type": "exclusive_write", 00:22:26.539 "zoned": false, 00:22:26.540 "supported_io_types": { 00:22:26.540 "read": true, 00:22:26.540 "write": true, 00:22:26.540 "unmap": true, 00:22:26.540 "flush": true, 00:22:26.540 "reset": true, 00:22:26.540 "nvme_admin": false, 00:22:26.540 "nvme_io": false, 00:22:26.540 "nvme_io_md": false, 00:22:26.540 "write_zeroes": true, 00:22:26.540 "zcopy": true, 00:22:26.540 "get_zone_info": false, 00:22:26.540 "zone_management": false, 00:22:26.540 "zone_append": false, 00:22:26.540 "compare": false, 00:22:26.540 "compare_and_write": false, 
00:22:26.540 "abort": true, 00:22:26.540 "seek_hole": false, 00:22:26.540 "seek_data": false, 00:22:26.540 "copy": true, 00:22:26.540 "nvme_iov_md": false 00:22:26.540 }, 00:22:26.540 "memory_domains": [ 00:22:26.540 { 00:22:26.540 "dma_device_id": "system", 00:22:26.540 "dma_device_type": 1 00:22:26.540 }, 00:22:26.540 { 00:22:26.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.540 "dma_device_type": 2 00:22:26.540 } 00:22:26.540 ], 00:22:26.540 "driver_specific": {} 00:22:26.540 } 00:22:26.540 ] 00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:26.540 "name": "Existed_Raid", 00:22:26.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.540 "strip_size_kb": 0, 00:22:26.540 "state": "configuring", 00:22:26.540 "raid_level": "raid1", 00:22:26.540 "superblock": false, 00:22:26.540 "num_base_bdevs": 4, 00:22:26.540 "num_base_bdevs_discovered": 3, 00:22:26.540 "num_base_bdevs_operational": 4, 00:22:26.540 "base_bdevs_list": [ 00:22:26.540 { 00:22:26.540 "name": "BaseBdev1", 00:22:26.540 "uuid": "fbfbb6ed-5a66-4495-ad0c-a2e8ee05b161", 00:22:26.540 "is_configured": true, 00:22:26.540 "data_offset": 0, 00:22:26.540 "data_size": 65536 00:22:26.540 }, 00:22:26.540 { 00:22:26.540 "name": "BaseBdev2", 00:22:26.540 "uuid": "1e8462f3-76c3-47fe-9fdc-78e578e5adb5", 00:22:26.540 "is_configured": true, 00:22:26.540 "data_offset": 0, 00:22:26.540 "data_size": 65536 00:22:26.540 }, 00:22:26.540 { 00:22:26.540 "name": "BaseBdev3", 00:22:26.540 "uuid": "a1d2eedd-eee5-43dc-8640-1136c833248e", 00:22:26.540 "is_configured": true, 00:22:26.540 "data_offset": 0, 00:22:26.540 "data_size": 65536 00:22:26.540 }, 00:22:26.540 { 00:22:26.540 "name": "BaseBdev4", 00:22:26.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.540 "is_configured": false, 
00:22:26.540 "data_offset": 0, 00:22:26.540 "data_size": 0 00:22:26.540 } 00:22:26.540 ] 00:22:26.540 }' 00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:26.540 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.799 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:26.799 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.799 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.060 [2024-12-05 12:54:09.384748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:27.060 [2024-12-05 12:54:09.384800] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:27.060 [2024-12-05 12:54:09.384808] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:27.060 [2024-12-05 12:54:09.385066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:27.060 [2024-12-05 12:54:09.385221] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:27.060 [2024-12-05 12:54:09.385232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:27.060 [2024-12-05 12:54:09.385467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:27.060 BaseBdev4 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.060 [ 00:22:27.060 { 00:22:27.060 "name": "BaseBdev4", 00:22:27.060 "aliases": [ 00:22:27.060 "05198e32-3801-4dbd-9eee-bd6102f33b98" 00:22:27.060 ], 00:22:27.060 "product_name": "Malloc disk", 00:22:27.060 "block_size": 512, 00:22:27.060 "num_blocks": 65536, 00:22:27.060 "uuid": "05198e32-3801-4dbd-9eee-bd6102f33b98", 00:22:27.060 "assigned_rate_limits": { 00:22:27.060 "rw_ios_per_sec": 0, 00:22:27.060 "rw_mbytes_per_sec": 0, 00:22:27.060 "r_mbytes_per_sec": 0, 00:22:27.060 "w_mbytes_per_sec": 0 00:22:27.060 }, 00:22:27.060 "claimed": true, 00:22:27.060 "claim_type": "exclusive_write", 00:22:27.060 "zoned": false, 00:22:27.060 "supported_io_types": { 00:22:27.060 "read": true, 00:22:27.060 "write": true, 00:22:27.060 "unmap": true, 00:22:27.060 "flush": true, 00:22:27.060 "reset": true, 00:22:27.060 
"nvme_admin": false, 00:22:27.060 "nvme_io": false, 00:22:27.060 "nvme_io_md": false, 00:22:27.060 "write_zeroes": true, 00:22:27.060 "zcopy": true, 00:22:27.060 "get_zone_info": false, 00:22:27.060 "zone_management": false, 00:22:27.060 "zone_append": false, 00:22:27.060 "compare": false, 00:22:27.060 "compare_and_write": false, 00:22:27.060 "abort": true, 00:22:27.060 "seek_hole": false, 00:22:27.060 "seek_data": false, 00:22:27.060 "copy": true, 00:22:27.060 "nvme_iov_md": false 00:22:27.060 }, 00:22:27.060 "memory_domains": [ 00:22:27.060 { 00:22:27.060 "dma_device_id": "system", 00:22:27.060 "dma_device_type": 1 00:22:27.060 }, 00:22:27.060 { 00:22:27.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.060 "dma_device_type": 2 00:22:27.060 } 00:22:27.060 ], 00:22:27.060 "driver_specific": {} 00:22:27.060 } 00:22:27.060 ] 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:27.060 12:54:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.060 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:27.060 "name": "Existed_Raid", 00:22:27.060 "uuid": "02399dae-872b-4039-902c-c8a3355d00b9", 00:22:27.060 "strip_size_kb": 0, 00:22:27.060 "state": "online", 00:22:27.060 "raid_level": "raid1", 00:22:27.060 "superblock": false, 00:22:27.060 "num_base_bdevs": 4, 00:22:27.060 "num_base_bdevs_discovered": 4, 00:22:27.060 "num_base_bdevs_operational": 4, 00:22:27.060 "base_bdevs_list": [ 00:22:27.060 { 00:22:27.060 "name": "BaseBdev1", 00:22:27.060 "uuid": "fbfbb6ed-5a66-4495-ad0c-a2e8ee05b161", 00:22:27.060 "is_configured": true, 00:22:27.060 "data_offset": 0, 00:22:27.060 "data_size": 65536 00:22:27.060 }, 00:22:27.060 { 00:22:27.060 "name": "BaseBdev2", 00:22:27.061 "uuid": "1e8462f3-76c3-47fe-9fdc-78e578e5adb5", 00:22:27.061 "is_configured": true, 00:22:27.061 "data_offset": 0, 00:22:27.061 "data_size": 65536 00:22:27.061 }, 00:22:27.061 { 00:22:27.061 "name": "BaseBdev3", 00:22:27.061 "uuid": 
"a1d2eedd-eee5-43dc-8640-1136c833248e", 00:22:27.061 "is_configured": true, 00:22:27.061 "data_offset": 0, 00:22:27.061 "data_size": 65536 00:22:27.061 }, 00:22:27.061 { 00:22:27.061 "name": "BaseBdev4", 00:22:27.061 "uuid": "05198e32-3801-4dbd-9eee-bd6102f33b98", 00:22:27.061 "is_configured": true, 00:22:27.061 "data_offset": 0, 00:22:27.061 "data_size": 65536 00:22:27.061 } 00:22:27.061 ] 00:22:27.061 }' 00:22:27.061 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:27.061 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.320 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:27.320 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:27.321 [2024-12-05 12:54:09.741254] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.321 12:54:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:27.321 "name": "Existed_Raid", 00:22:27.321 "aliases": [ 00:22:27.321 "02399dae-872b-4039-902c-c8a3355d00b9" 00:22:27.321 ], 00:22:27.321 "product_name": "Raid Volume", 00:22:27.321 "block_size": 512, 00:22:27.321 "num_blocks": 65536, 00:22:27.321 "uuid": "02399dae-872b-4039-902c-c8a3355d00b9", 00:22:27.321 "assigned_rate_limits": { 00:22:27.321 "rw_ios_per_sec": 0, 00:22:27.321 "rw_mbytes_per_sec": 0, 00:22:27.321 "r_mbytes_per_sec": 0, 00:22:27.321 "w_mbytes_per_sec": 0 00:22:27.321 }, 00:22:27.321 "claimed": false, 00:22:27.321 "zoned": false, 00:22:27.321 "supported_io_types": { 00:22:27.321 "read": true, 00:22:27.321 "write": true, 00:22:27.321 "unmap": false, 00:22:27.321 "flush": false, 00:22:27.321 "reset": true, 00:22:27.321 "nvme_admin": false, 00:22:27.321 "nvme_io": false, 00:22:27.321 "nvme_io_md": false, 00:22:27.321 "write_zeroes": true, 00:22:27.321 "zcopy": false, 00:22:27.321 "get_zone_info": false, 00:22:27.321 "zone_management": false, 00:22:27.321 "zone_append": false, 00:22:27.321 "compare": false, 00:22:27.321 "compare_and_write": false, 00:22:27.321 "abort": false, 00:22:27.321 "seek_hole": false, 00:22:27.321 "seek_data": false, 00:22:27.321 "copy": false, 00:22:27.321 "nvme_iov_md": false 00:22:27.321 }, 00:22:27.321 "memory_domains": [ 00:22:27.321 { 00:22:27.321 "dma_device_id": "system", 00:22:27.321 "dma_device_type": 1 00:22:27.321 }, 00:22:27.321 { 00:22:27.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.321 "dma_device_type": 2 00:22:27.321 }, 00:22:27.321 { 00:22:27.321 "dma_device_id": "system", 00:22:27.321 "dma_device_type": 1 00:22:27.321 }, 00:22:27.321 { 00:22:27.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.321 "dma_device_type": 2 00:22:27.321 }, 00:22:27.321 { 00:22:27.321 "dma_device_id": "system", 00:22:27.321 "dma_device_type": 1 00:22:27.321 }, 00:22:27.321 { 00:22:27.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:22:27.321 "dma_device_type": 2 00:22:27.321 }, 00:22:27.321 { 00:22:27.321 "dma_device_id": "system", 00:22:27.321 "dma_device_type": 1 00:22:27.321 }, 00:22:27.321 { 00:22:27.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.321 "dma_device_type": 2 00:22:27.321 } 00:22:27.321 ], 00:22:27.321 "driver_specific": { 00:22:27.321 "raid": { 00:22:27.321 "uuid": "02399dae-872b-4039-902c-c8a3355d00b9", 00:22:27.321 "strip_size_kb": 0, 00:22:27.321 "state": "online", 00:22:27.321 "raid_level": "raid1", 00:22:27.321 "superblock": false, 00:22:27.321 "num_base_bdevs": 4, 00:22:27.321 "num_base_bdevs_discovered": 4, 00:22:27.321 "num_base_bdevs_operational": 4, 00:22:27.321 "base_bdevs_list": [ 00:22:27.321 { 00:22:27.321 "name": "BaseBdev1", 00:22:27.321 "uuid": "fbfbb6ed-5a66-4495-ad0c-a2e8ee05b161", 00:22:27.321 "is_configured": true, 00:22:27.321 "data_offset": 0, 00:22:27.321 "data_size": 65536 00:22:27.321 }, 00:22:27.321 { 00:22:27.321 "name": "BaseBdev2", 00:22:27.321 "uuid": "1e8462f3-76c3-47fe-9fdc-78e578e5adb5", 00:22:27.321 "is_configured": true, 00:22:27.321 "data_offset": 0, 00:22:27.321 "data_size": 65536 00:22:27.321 }, 00:22:27.321 { 00:22:27.321 "name": "BaseBdev3", 00:22:27.321 "uuid": "a1d2eedd-eee5-43dc-8640-1136c833248e", 00:22:27.321 "is_configured": true, 00:22:27.321 "data_offset": 0, 00:22:27.321 "data_size": 65536 00:22:27.321 }, 00:22:27.321 { 00:22:27.321 "name": "BaseBdev4", 00:22:27.321 "uuid": "05198e32-3801-4dbd-9eee-bd6102f33b98", 00:22:27.321 "is_configured": true, 00:22:27.321 "data_offset": 0, 00:22:27.321 "data_size": 65536 00:22:27.321 } 00:22:27.321 ] 00:22:27.321 } 00:22:27.321 } 00:22:27.321 }' 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:27.321 BaseBdev2 00:22:27.321 BaseBdev3 
00:22:27.321 BaseBdev4' 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.321 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.584 12:54:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:27.584 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:27.584 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:27.584 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:27.584 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:27.584 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.584 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.584 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.584 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:27.584 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:27.584 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:27.584 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:27.584 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:27.584 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.584 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.584 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.584 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:27.584 12:54:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:27.584 12:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:27.584 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.584 12:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.584 [2024-12-05 12:54:09.981007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:27.584 
12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:27.584 "name": "Existed_Raid", 00:22:27.584 "uuid": "02399dae-872b-4039-902c-c8a3355d00b9", 00:22:27.584 "strip_size_kb": 0, 00:22:27.584 "state": "online", 00:22:27.584 "raid_level": "raid1", 00:22:27.584 "superblock": false, 00:22:27.584 "num_base_bdevs": 4, 00:22:27.584 "num_base_bdevs_discovered": 3, 00:22:27.584 "num_base_bdevs_operational": 3, 00:22:27.584 "base_bdevs_list": [ 00:22:27.584 { 00:22:27.584 "name": null, 00:22:27.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.584 "is_configured": false, 00:22:27.584 "data_offset": 0, 00:22:27.584 "data_size": 65536 00:22:27.584 }, 00:22:27.584 { 00:22:27.584 "name": "BaseBdev2", 00:22:27.584 "uuid": "1e8462f3-76c3-47fe-9fdc-78e578e5adb5", 00:22:27.584 "is_configured": true, 00:22:27.584 "data_offset": 0, 00:22:27.584 "data_size": 65536 00:22:27.584 }, 00:22:27.584 { 00:22:27.584 "name": "BaseBdev3", 00:22:27.584 "uuid": "a1d2eedd-eee5-43dc-8640-1136c833248e", 00:22:27.584 "is_configured": true, 00:22:27.584 "data_offset": 0, 
00:22:27.584 "data_size": 65536 00:22:27.584 }, 00:22:27.584 { 00:22:27.584 "name": "BaseBdev4", 00:22:27.584 "uuid": "05198e32-3801-4dbd-9eee-bd6102f33b98", 00:22:27.584 "is_configured": true, 00:22:27.584 "data_offset": 0, 00:22:27.584 "data_size": 65536 00:22:27.584 } 00:22:27.584 ] 00:22:27.584 }' 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:27.584 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.845 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:27.845 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:27.845 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.845 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:27.845 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.845 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.845 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.845 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:27.845 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:27.845 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:27.845 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.845 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.104 [2024-12-05 12:54:10.429203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:28.104 12:54:10 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.104 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:28.104 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:28.104 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:28.104 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.104 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.104 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.104 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.104 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:28.104 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:28.104 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:28.104 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.104 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.105 [2024-12-05 12:54:10.523393] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:28.105 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.105 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:28.105 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:28.105 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.105 12:54:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:28.105 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.105 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.105 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.105 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:28.105 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:28.105 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:22:28.105 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.105 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.105 [2024-12-05 12:54:10.629275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:28.105 [2024-12-05 12:54:10.629362] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:28.366 [2024-12-05 12:54:10.688400] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:28.366 [2024-12-05 12:54:10.688446] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:28.366 [2024-12-05 12:54:10.688457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:28.366 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.366 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:28.366 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:28.366 12:54:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.366 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.366 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.366 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:28.366 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.366 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:28.366 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:28.366 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:22:28.366 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:28.366 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:28.366 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:28.366 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.366 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.366 BaseBdev2 00:22:28.366 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.366 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.367 [ 00:22:28.367 { 00:22:28.367 "name": "BaseBdev2", 00:22:28.367 "aliases": [ 00:22:28.367 "fcb01c4a-c67b-4646-9bcd-4b6bb0e63d50" 00:22:28.367 ], 00:22:28.367 "product_name": "Malloc disk", 00:22:28.367 "block_size": 512, 00:22:28.367 "num_blocks": 65536, 00:22:28.367 "uuid": "fcb01c4a-c67b-4646-9bcd-4b6bb0e63d50", 00:22:28.367 "assigned_rate_limits": { 00:22:28.367 "rw_ios_per_sec": 0, 00:22:28.367 "rw_mbytes_per_sec": 0, 00:22:28.367 "r_mbytes_per_sec": 0, 00:22:28.367 "w_mbytes_per_sec": 0 00:22:28.367 }, 00:22:28.367 "claimed": false, 00:22:28.367 "zoned": false, 00:22:28.367 "supported_io_types": { 00:22:28.367 "read": true, 00:22:28.367 "write": true, 00:22:28.367 "unmap": true, 00:22:28.367 "flush": true, 00:22:28.367 "reset": true, 00:22:28.367 "nvme_admin": false, 00:22:28.367 "nvme_io": false, 00:22:28.367 "nvme_io_md": false, 00:22:28.367 "write_zeroes": true, 00:22:28.367 "zcopy": true, 00:22:28.367 "get_zone_info": false, 00:22:28.367 "zone_management": false, 00:22:28.367 "zone_append": false, 
00:22:28.367 "compare": false, 00:22:28.367 "compare_and_write": false, 00:22:28.367 "abort": true, 00:22:28.367 "seek_hole": false, 00:22:28.367 "seek_data": false, 00:22:28.367 "copy": true, 00:22:28.367 "nvme_iov_md": false 00:22:28.367 }, 00:22:28.367 "memory_domains": [ 00:22:28.367 { 00:22:28.367 "dma_device_id": "system", 00:22:28.367 "dma_device_type": 1 00:22:28.367 }, 00:22:28.367 { 00:22:28.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.367 "dma_device_type": 2 00:22:28.367 } 00:22:28.367 ], 00:22:28.367 "driver_specific": {} 00:22:28.367 } 00:22:28.367 ] 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.367 BaseBdev3 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.367 [ 00:22:28.367 { 00:22:28.367 "name": "BaseBdev3", 00:22:28.367 "aliases": [ 00:22:28.367 "1f7a66d5-f02c-4b76-8c66-39999e365af7" 00:22:28.367 ], 00:22:28.367 "product_name": "Malloc disk", 00:22:28.367 "block_size": 512, 00:22:28.367 "num_blocks": 65536, 00:22:28.367 "uuid": "1f7a66d5-f02c-4b76-8c66-39999e365af7", 00:22:28.367 "assigned_rate_limits": { 00:22:28.367 "rw_ios_per_sec": 0, 00:22:28.367 "rw_mbytes_per_sec": 0, 00:22:28.367 "r_mbytes_per_sec": 0, 00:22:28.367 "w_mbytes_per_sec": 0 00:22:28.367 }, 00:22:28.367 "claimed": false, 00:22:28.367 "zoned": false, 00:22:28.367 "supported_io_types": { 00:22:28.367 "read": true, 00:22:28.367 "write": true, 00:22:28.367 "unmap": true, 00:22:28.367 "flush": true, 00:22:28.367 "reset": true, 00:22:28.367 "nvme_admin": false, 00:22:28.367 "nvme_io": false, 00:22:28.367 "nvme_io_md": false, 00:22:28.367 "write_zeroes": true, 00:22:28.367 "zcopy": true, 00:22:28.367 "get_zone_info": false, 00:22:28.367 "zone_management": false, 00:22:28.367 "zone_append": false, 
00:22:28.367 "compare": false, 00:22:28.367 "compare_and_write": false, 00:22:28.367 "abort": true, 00:22:28.367 "seek_hole": false, 00:22:28.367 "seek_data": false, 00:22:28.367 "copy": true, 00:22:28.367 "nvme_iov_md": false 00:22:28.367 }, 00:22:28.367 "memory_domains": [ 00:22:28.367 { 00:22:28.367 "dma_device_id": "system", 00:22:28.367 "dma_device_type": 1 00:22:28.367 }, 00:22:28.367 { 00:22:28.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.367 "dma_device_type": 2 00:22:28.367 } 00:22:28.367 ], 00:22:28.367 "driver_specific": {} 00:22:28.367 } 00:22:28.367 ] 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.367 BaseBdev4 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.367 [ 00:22:28.367 { 00:22:28.367 "name": "BaseBdev4", 00:22:28.367 "aliases": [ 00:22:28.367 "40b55ccc-5634-4f01-b875-8bba50bdf58d" 00:22:28.367 ], 00:22:28.367 "product_name": "Malloc disk", 00:22:28.367 "block_size": 512, 00:22:28.367 "num_blocks": 65536, 00:22:28.367 "uuid": "40b55ccc-5634-4f01-b875-8bba50bdf58d", 00:22:28.367 "assigned_rate_limits": { 00:22:28.367 "rw_ios_per_sec": 0, 00:22:28.367 "rw_mbytes_per_sec": 0, 00:22:28.367 "r_mbytes_per_sec": 0, 00:22:28.367 "w_mbytes_per_sec": 0 00:22:28.367 }, 00:22:28.367 "claimed": false, 00:22:28.367 "zoned": false, 00:22:28.367 "supported_io_types": { 00:22:28.367 "read": true, 00:22:28.367 "write": true, 00:22:28.367 "unmap": true, 00:22:28.367 "flush": true, 00:22:28.367 "reset": true, 00:22:28.367 "nvme_admin": false, 00:22:28.367 "nvme_io": false, 00:22:28.367 "nvme_io_md": false, 00:22:28.367 "write_zeroes": true, 00:22:28.367 "zcopy": true, 00:22:28.367 "get_zone_info": false, 00:22:28.367 "zone_management": false, 00:22:28.367 "zone_append": false, 
00:22:28.367 "compare": false, 00:22:28.367 "compare_and_write": false, 00:22:28.367 "abort": true, 00:22:28.367 "seek_hole": false, 00:22:28.367 "seek_data": false, 00:22:28.367 "copy": true, 00:22:28.367 "nvme_iov_md": false 00:22:28.367 }, 00:22:28.367 "memory_domains": [ 00:22:28.367 { 00:22:28.367 "dma_device_id": "system", 00:22:28.367 "dma_device_type": 1 00:22:28.367 }, 00:22:28.367 { 00:22:28.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.367 "dma_device_type": 2 00:22:28.367 } 00:22:28.367 ], 00:22:28.367 "driver_specific": {} 00:22:28.367 } 00:22:28.367 ] 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.367 [2024-12-05 12:54:10.891654] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:28.367 [2024-12-05 12:54:10.891800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:28.367 [2024-12-05 12:54:10.891873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:28.367 [2024-12-05 12:54:10.893732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:28.367 [2024-12-05 12:54:10.893851] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:28.367 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.368 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.368 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.368 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.368 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:22:28.368 "name": "Existed_Raid", 00:22:28.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.368 "strip_size_kb": 0, 00:22:28.368 "state": "configuring", 00:22:28.368 "raid_level": "raid1", 00:22:28.368 "superblock": false, 00:22:28.368 "num_base_bdevs": 4, 00:22:28.368 "num_base_bdevs_discovered": 3, 00:22:28.368 "num_base_bdevs_operational": 4, 00:22:28.368 "base_bdevs_list": [ 00:22:28.368 { 00:22:28.368 "name": "BaseBdev1", 00:22:28.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.368 "is_configured": false, 00:22:28.368 "data_offset": 0, 00:22:28.368 "data_size": 0 00:22:28.368 }, 00:22:28.368 { 00:22:28.368 "name": "BaseBdev2", 00:22:28.368 "uuid": "fcb01c4a-c67b-4646-9bcd-4b6bb0e63d50", 00:22:28.368 "is_configured": true, 00:22:28.368 "data_offset": 0, 00:22:28.368 "data_size": 65536 00:22:28.368 }, 00:22:28.368 { 00:22:28.368 "name": "BaseBdev3", 00:22:28.368 "uuid": "1f7a66d5-f02c-4b76-8c66-39999e365af7", 00:22:28.368 "is_configured": true, 00:22:28.368 "data_offset": 0, 00:22:28.368 "data_size": 65536 00:22:28.368 }, 00:22:28.368 { 00:22:28.368 "name": "BaseBdev4", 00:22:28.368 "uuid": "40b55ccc-5634-4f01-b875-8bba50bdf58d", 00:22:28.368 "is_configured": true, 00:22:28.368 "data_offset": 0, 00:22:28.368 "data_size": 65536 00:22:28.368 } 00:22:28.368 ] 00:22:28.368 }' 00:22:28.368 12:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:28.368 12:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.938 [2024-12-05 12:54:11.215737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:28.938 "name": "Existed_Raid", 00:22:28.938 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:28.938 "strip_size_kb": 0, 00:22:28.938 "state": "configuring", 00:22:28.938 "raid_level": "raid1", 00:22:28.938 "superblock": false, 00:22:28.938 "num_base_bdevs": 4, 00:22:28.938 "num_base_bdevs_discovered": 2, 00:22:28.938 "num_base_bdevs_operational": 4, 00:22:28.938 "base_bdevs_list": [ 00:22:28.938 { 00:22:28.938 "name": "BaseBdev1", 00:22:28.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.938 "is_configured": false, 00:22:28.938 "data_offset": 0, 00:22:28.938 "data_size": 0 00:22:28.938 }, 00:22:28.938 { 00:22:28.938 "name": null, 00:22:28.938 "uuid": "fcb01c4a-c67b-4646-9bcd-4b6bb0e63d50", 00:22:28.938 "is_configured": false, 00:22:28.938 "data_offset": 0, 00:22:28.938 "data_size": 65536 00:22:28.938 }, 00:22:28.938 { 00:22:28.938 "name": "BaseBdev3", 00:22:28.938 "uuid": "1f7a66d5-f02c-4b76-8c66-39999e365af7", 00:22:28.938 "is_configured": true, 00:22:28.938 "data_offset": 0, 00:22:28.938 "data_size": 65536 00:22:28.938 }, 00:22:28.938 { 00:22:28.938 "name": "BaseBdev4", 00:22:28.938 "uuid": "40b55ccc-5634-4f01-b875-8bba50bdf58d", 00:22:28.938 "is_configured": true, 00:22:28.938 "data_offset": 0, 00:22:28.938 "data_size": 65536 00:22:28.938 } 00:22:28.938 ] 00:22:28.938 }' 00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:28.938 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.198 [2024-12-05 12:54:11.621900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:29.198 BaseBdev1 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.198 [ 00:22:29.198 { 00:22:29.198 "name": "BaseBdev1", 00:22:29.198 "aliases": [ 00:22:29.198 "006c21b6-3fdf-49e0-b69e-b29067c93c3d" 00:22:29.198 ], 00:22:29.198 "product_name": "Malloc disk", 00:22:29.198 "block_size": 512, 00:22:29.198 "num_blocks": 65536, 00:22:29.198 "uuid": "006c21b6-3fdf-49e0-b69e-b29067c93c3d", 00:22:29.198 "assigned_rate_limits": { 00:22:29.198 "rw_ios_per_sec": 0, 00:22:29.198 "rw_mbytes_per_sec": 0, 00:22:29.198 "r_mbytes_per_sec": 0, 00:22:29.198 "w_mbytes_per_sec": 0 00:22:29.198 }, 00:22:29.198 "claimed": true, 00:22:29.198 "claim_type": "exclusive_write", 00:22:29.198 "zoned": false, 00:22:29.198 "supported_io_types": { 00:22:29.198 "read": true, 00:22:29.198 "write": true, 00:22:29.198 "unmap": true, 00:22:29.198 "flush": true, 00:22:29.198 "reset": true, 00:22:29.198 "nvme_admin": false, 00:22:29.198 "nvme_io": false, 00:22:29.198 "nvme_io_md": false, 00:22:29.198 "write_zeroes": true, 00:22:29.198 "zcopy": true, 00:22:29.198 "get_zone_info": false, 00:22:29.198 "zone_management": false, 00:22:29.198 "zone_append": false, 00:22:29.198 "compare": false, 00:22:29.198 "compare_and_write": false, 00:22:29.198 "abort": true, 00:22:29.198 "seek_hole": false, 00:22:29.198 "seek_data": false, 00:22:29.198 "copy": true, 00:22:29.198 "nvme_iov_md": false 00:22:29.198 }, 00:22:29.198 "memory_domains": [ 00:22:29.198 { 00:22:29.198 "dma_device_id": "system", 00:22:29.198 "dma_device_type": 1 00:22:29.198 }, 00:22:29.198 { 00:22:29.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:29.198 "dma_device_type": 2 00:22:29.198 } 00:22:29.198 ], 00:22:29.198 "driver_specific": {} 00:22:29.198 } 00:22:29.198 ] 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:29.198 "name": "Existed_Raid", 00:22:29.198 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:29.198 "strip_size_kb": 0, 00:22:29.198 "state": "configuring", 00:22:29.198 "raid_level": "raid1", 00:22:29.198 "superblock": false, 00:22:29.198 "num_base_bdevs": 4, 00:22:29.198 "num_base_bdevs_discovered": 3, 00:22:29.198 "num_base_bdevs_operational": 4, 00:22:29.198 "base_bdevs_list": [ 00:22:29.198 { 00:22:29.198 "name": "BaseBdev1", 00:22:29.198 "uuid": "006c21b6-3fdf-49e0-b69e-b29067c93c3d", 00:22:29.198 "is_configured": true, 00:22:29.198 "data_offset": 0, 00:22:29.198 "data_size": 65536 00:22:29.198 }, 00:22:29.198 { 00:22:29.198 "name": null, 00:22:29.198 "uuid": "fcb01c4a-c67b-4646-9bcd-4b6bb0e63d50", 00:22:29.198 "is_configured": false, 00:22:29.198 "data_offset": 0, 00:22:29.198 "data_size": 65536 00:22:29.198 }, 00:22:29.198 { 00:22:29.198 "name": "BaseBdev3", 00:22:29.198 "uuid": "1f7a66d5-f02c-4b76-8c66-39999e365af7", 00:22:29.198 "is_configured": true, 00:22:29.198 "data_offset": 0, 00:22:29.198 "data_size": 65536 00:22:29.198 }, 00:22:29.198 { 00:22:29.198 "name": "BaseBdev4", 00:22:29.198 "uuid": "40b55ccc-5634-4f01-b875-8bba50bdf58d", 00:22:29.198 "is_configured": true, 00:22:29.198 "data_offset": 0, 00:22:29.198 "data_size": 65536 00:22:29.198 } 00:22:29.198 ] 00:22:29.198 }' 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:29.198 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.460 [2024-12-05 12:54:11.986075] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.460 12:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.460 12:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.460 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:29.460 "name": "Existed_Raid", 00:22:29.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.460 "strip_size_kb": 0, 00:22:29.460 "state": "configuring", 00:22:29.460 "raid_level": "raid1", 00:22:29.460 "superblock": false, 00:22:29.460 "num_base_bdevs": 4, 00:22:29.460 "num_base_bdevs_discovered": 2, 00:22:29.460 "num_base_bdevs_operational": 4, 00:22:29.460 "base_bdevs_list": [ 00:22:29.460 { 00:22:29.460 "name": "BaseBdev1", 00:22:29.460 "uuid": "006c21b6-3fdf-49e0-b69e-b29067c93c3d", 00:22:29.460 "is_configured": true, 00:22:29.460 "data_offset": 0, 00:22:29.460 "data_size": 65536 00:22:29.460 }, 00:22:29.460 { 00:22:29.460 "name": null, 00:22:29.460 "uuid": "fcb01c4a-c67b-4646-9bcd-4b6bb0e63d50", 00:22:29.460 "is_configured": false, 00:22:29.460 "data_offset": 0, 00:22:29.460 "data_size": 65536 00:22:29.460 }, 00:22:29.460 { 00:22:29.460 "name": null, 00:22:29.460 "uuid": "1f7a66d5-f02c-4b76-8c66-39999e365af7", 00:22:29.460 "is_configured": false, 00:22:29.460 "data_offset": 0, 00:22:29.460 "data_size": 65536 00:22:29.460 }, 00:22:29.460 { 00:22:29.460 "name": "BaseBdev4", 00:22:29.460 "uuid": "40b55ccc-5634-4f01-b875-8bba50bdf58d", 00:22:29.460 "is_configured": true, 00:22:29.460 "data_offset": 0, 00:22:29.460 "data_size": 65536 00:22:29.460 } 00:22:29.460 ] 00:22:29.460 }' 00:22:29.460 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:29.460 12:54:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.722 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.722 12:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.722 12:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.722 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:29.722 12:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.984 [2024-12-05 12:54:12.310147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:29.984 12:54:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:29.984 "name": "Existed_Raid", 00:22:29.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.984 "strip_size_kb": 0, 00:22:29.984 "state": "configuring", 00:22:29.984 "raid_level": "raid1", 00:22:29.984 "superblock": false, 00:22:29.984 "num_base_bdevs": 4, 00:22:29.984 "num_base_bdevs_discovered": 3, 00:22:29.984 "num_base_bdevs_operational": 4, 00:22:29.984 "base_bdevs_list": [ 00:22:29.984 { 00:22:29.984 "name": "BaseBdev1", 00:22:29.984 "uuid": "006c21b6-3fdf-49e0-b69e-b29067c93c3d", 00:22:29.984 "is_configured": true, 00:22:29.984 "data_offset": 0, 00:22:29.984 "data_size": 65536 00:22:29.984 }, 00:22:29.984 { 00:22:29.984 "name": null, 00:22:29.984 "uuid": "fcb01c4a-c67b-4646-9bcd-4b6bb0e63d50", 00:22:29.984 "is_configured": false, 00:22:29.984 "data_offset": 
0, 00:22:29.984 "data_size": 65536 00:22:29.984 }, 00:22:29.984 { 00:22:29.984 "name": "BaseBdev3", 00:22:29.984 "uuid": "1f7a66d5-f02c-4b76-8c66-39999e365af7", 00:22:29.984 "is_configured": true, 00:22:29.984 "data_offset": 0, 00:22:29.984 "data_size": 65536 00:22:29.984 }, 00:22:29.984 { 00:22:29.984 "name": "BaseBdev4", 00:22:29.984 "uuid": "40b55ccc-5634-4f01-b875-8bba50bdf58d", 00:22:29.984 "is_configured": true, 00:22:29.984 "data_offset": 0, 00:22:29.984 "data_size": 65536 00:22:29.984 } 00:22:29.984 ] 00:22:29.984 }' 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:29.984 12:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.245 [2024-12-05 12:54:12.666259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.245 12:54:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:30.245 "name": "Existed_Raid", 00:22:30.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.245 "strip_size_kb": 0, 00:22:30.245 "state": "configuring", 00:22:30.245 
"raid_level": "raid1", 00:22:30.245 "superblock": false, 00:22:30.245 "num_base_bdevs": 4, 00:22:30.245 "num_base_bdevs_discovered": 2, 00:22:30.245 "num_base_bdevs_operational": 4, 00:22:30.245 "base_bdevs_list": [ 00:22:30.245 { 00:22:30.245 "name": null, 00:22:30.245 "uuid": "006c21b6-3fdf-49e0-b69e-b29067c93c3d", 00:22:30.245 "is_configured": false, 00:22:30.245 "data_offset": 0, 00:22:30.245 "data_size": 65536 00:22:30.245 }, 00:22:30.245 { 00:22:30.245 "name": null, 00:22:30.245 "uuid": "fcb01c4a-c67b-4646-9bcd-4b6bb0e63d50", 00:22:30.245 "is_configured": false, 00:22:30.245 "data_offset": 0, 00:22:30.245 "data_size": 65536 00:22:30.245 }, 00:22:30.245 { 00:22:30.245 "name": "BaseBdev3", 00:22:30.245 "uuid": "1f7a66d5-f02c-4b76-8c66-39999e365af7", 00:22:30.245 "is_configured": true, 00:22:30.245 "data_offset": 0, 00:22:30.245 "data_size": 65536 00:22:30.245 }, 00:22:30.245 { 00:22:30.245 "name": "BaseBdev4", 00:22:30.245 "uuid": "40b55ccc-5634-4f01-b875-8bba50bdf58d", 00:22:30.245 "is_configured": true, 00:22:30.245 "data_offset": 0, 00:22:30.245 "data_size": 65536 00:22:30.245 } 00:22:30.245 ] 00:22:30.245 }' 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:30.245 12:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.564 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.564 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.564 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.564 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:30.564 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.564 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.565 [2024-12-05 12:54:13.080846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:30.565 "name": "Existed_Raid", 00:22:30.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.565 "strip_size_kb": 0, 00:22:30.565 "state": "configuring", 00:22:30.565 "raid_level": "raid1", 00:22:30.565 "superblock": false, 00:22:30.565 "num_base_bdevs": 4, 00:22:30.565 "num_base_bdevs_discovered": 3, 00:22:30.565 "num_base_bdevs_operational": 4, 00:22:30.565 "base_bdevs_list": [ 00:22:30.565 { 00:22:30.565 "name": null, 00:22:30.565 "uuid": "006c21b6-3fdf-49e0-b69e-b29067c93c3d", 00:22:30.565 "is_configured": false, 00:22:30.565 "data_offset": 0, 00:22:30.565 "data_size": 65536 00:22:30.565 }, 00:22:30.565 { 00:22:30.565 "name": "BaseBdev2", 00:22:30.565 "uuid": "fcb01c4a-c67b-4646-9bcd-4b6bb0e63d50", 00:22:30.565 "is_configured": true, 00:22:30.565 "data_offset": 0, 00:22:30.565 "data_size": 65536 00:22:30.565 }, 00:22:30.565 { 00:22:30.565 "name": "BaseBdev3", 00:22:30.565 "uuid": "1f7a66d5-f02c-4b76-8c66-39999e365af7", 00:22:30.565 "is_configured": true, 00:22:30.565 "data_offset": 0, 00:22:30.565 "data_size": 65536 00:22:30.565 }, 00:22:30.565 { 00:22:30.565 "name": "BaseBdev4", 00:22:30.565 "uuid": "40b55ccc-5634-4f01-b875-8bba50bdf58d", 00:22:30.565 "is_configured": true, 00:22:30.565 "data_offset": 0, 00:22:30.565 "data_size": 65536 00:22:30.565 } 00:22:30.565 ] 00:22:30.565 }' 00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:30.565 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.825 12:54:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:30.825 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.825 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.825 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.825 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 006c21b6-3fdf-49e0-b69e-b29067c93c3d 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.086 [2024-12-05 12:54:13.470954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:31.086 [2024-12-05 12:54:13.471113] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:31.086 [2024-12-05 12:54:13.471129] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:31.086 
[2024-12-05 12:54:13.471385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:22:31.086 [2024-12-05 12:54:13.471556] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:31.086 [2024-12-05 12:54:13.471565] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:31.086 [2024-12-05 12:54:13.471778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:31.086 NewBaseBdev 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.086 [ 00:22:31.086 { 00:22:31.086 "name": "NewBaseBdev", 00:22:31.086 "aliases": [ 00:22:31.086 "006c21b6-3fdf-49e0-b69e-b29067c93c3d" 00:22:31.086 ], 00:22:31.086 "product_name": "Malloc disk", 00:22:31.086 "block_size": 512, 00:22:31.086 "num_blocks": 65536, 00:22:31.086 "uuid": "006c21b6-3fdf-49e0-b69e-b29067c93c3d", 00:22:31.086 "assigned_rate_limits": { 00:22:31.086 "rw_ios_per_sec": 0, 00:22:31.086 "rw_mbytes_per_sec": 0, 00:22:31.086 "r_mbytes_per_sec": 0, 00:22:31.086 "w_mbytes_per_sec": 0 00:22:31.086 }, 00:22:31.086 "claimed": true, 00:22:31.086 "claim_type": "exclusive_write", 00:22:31.086 "zoned": false, 00:22:31.086 "supported_io_types": { 00:22:31.086 "read": true, 00:22:31.086 "write": true, 00:22:31.086 "unmap": true, 00:22:31.086 "flush": true, 00:22:31.086 "reset": true, 00:22:31.086 "nvme_admin": false, 00:22:31.086 "nvme_io": false, 00:22:31.086 "nvme_io_md": false, 00:22:31.086 "write_zeroes": true, 00:22:31.086 "zcopy": true, 00:22:31.086 "get_zone_info": false, 00:22:31.086 "zone_management": false, 00:22:31.086 "zone_append": false, 00:22:31.086 "compare": false, 00:22:31.086 "compare_and_write": false, 00:22:31.086 "abort": true, 00:22:31.086 "seek_hole": false, 00:22:31.086 "seek_data": false, 00:22:31.086 "copy": true, 00:22:31.086 "nvme_iov_md": false 00:22:31.086 }, 00:22:31.086 "memory_domains": [ 00:22:31.086 { 00:22:31.086 "dma_device_id": "system", 00:22:31.086 "dma_device_type": 1 00:22:31.086 }, 00:22:31.086 { 00:22:31.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.086 "dma_device_type": 2 00:22:31.086 } 00:22:31.086 ], 00:22:31.086 "driver_specific": {} 00:22:31.086 } 00:22:31.086 ] 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.086 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:31.086 "name": "Existed_Raid", 00:22:31.086 "uuid": "32bff804-6e0a-4bef-8617-913c870286fc", 00:22:31.086 "strip_size_kb": 0, 00:22:31.086 "state": "online", 00:22:31.086 
"raid_level": "raid1", 00:22:31.086 "superblock": false, 00:22:31.086 "num_base_bdevs": 4, 00:22:31.086 "num_base_bdevs_discovered": 4, 00:22:31.086 "num_base_bdevs_operational": 4, 00:22:31.086 "base_bdevs_list": [ 00:22:31.086 { 00:22:31.086 "name": "NewBaseBdev", 00:22:31.086 "uuid": "006c21b6-3fdf-49e0-b69e-b29067c93c3d", 00:22:31.086 "is_configured": true, 00:22:31.086 "data_offset": 0, 00:22:31.086 "data_size": 65536 00:22:31.086 }, 00:22:31.086 { 00:22:31.086 "name": "BaseBdev2", 00:22:31.086 "uuid": "fcb01c4a-c67b-4646-9bcd-4b6bb0e63d50", 00:22:31.086 "is_configured": true, 00:22:31.086 "data_offset": 0, 00:22:31.086 "data_size": 65536 00:22:31.086 }, 00:22:31.086 { 00:22:31.086 "name": "BaseBdev3", 00:22:31.086 "uuid": "1f7a66d5-f02c-4b76-8c66-39999e365af7", 00:22:31.086 "is_configured": true, 00:22:31.086 "data_offset": 0, 00:22:31.086 "data_size": 65536 00:22:31.086 }, 00:22:31.086 { 00:22:31.086 "name": "BaseBdev4", 00:22:31.086 "uuid": "40b55ccc-5634-4f01-b875-8bba50bdf58d", 00:22:31.086 "is_configured": true, 00:22:31.086 "data_offset": 0, 00:22:31.086 "data_size": 65536 00:22:31.086 } 00:22:31.086 ] 00:22:31.086 }' 00:22:31.087 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:31.087 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.358 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:31.358 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:31.358 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:31.358 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:31.358 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:31.358 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:22:31.358 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:31.358 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.358 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:31.358 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.358 [2024-12-05 12:54:13.815434] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:31.358 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.358 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:31.358 "name": "Existed_Raid", 00:22:31.358 "aliases": [ 00:22:31.358 "32bff804-6e0a-4bef-8617-913c870286fc" 00:22:31.358 ], 00:22:31.358 "product_name": "Raid Volume", 00:22:31.358 "block_size": 512, 00:22:31.358 "num_blocks": 65536, 00:22:31.359 "uuid": "32bff804-6e0a-4bef-8617-913c870286fc", 00:22:31.359 "assigned_rate_limits": { 00:22:31.359 "rw_ios_per_sec": 0, 00:22:31.359 "rw_mbytes_per_sec": 0, 00:22:31.359 "r_mbytes_per_sec": 0, 00:22:31.359 "w_mbytes_per_sec": 0 00:22:31.359 }, 00:22:31.359 "claimed": false, 00:22:31.359 "zoned": false, 00:22:31.359 "supported_io_types": { 00:22:31.359 "read": true, 00:22:31.359 "write": true, 00:22:31.359 "unmap": false, 00:22:31.359 "flush": false, 00:22:31.359 "reset": true, 00:22:31.359 "nvme_admin": false, 00:22:31.359 "nvme_io": false, 00:22:31.359 "nvme_io_md": false, 00:22:31.359 "write_zeroes": true, 00:22:31.359 "zcopy": false, 00:22:31.359 "get_zone_info": false, 00:22:31.359 "zone_management": false, 00:22:31.359 "zone_append": false, 00:22:31.359 "compare": false, 00:22:31.359 "compare_and_write": false, 00:22:31.359 "abort": false, 00:22:31.359 "seek_hole": false, 00:22:31.359 "seek_data": false, 00:22:31.359 
"copy": false, 00:22:31.359 "nvme_iov_md": false 00:22:31.359 }, 00:22:31.359 "memory_domains": [ 00:22:31.359 { 00:22:31.359 "dma_device_id": "system", 00:22:31.359 "dma_device_type": 1 00:22:31.359 }, 00:22:31.359 { 00:22:31.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.359 "dma_device_type": 2 00:22:31.359 }, 00:22:31.359 { 00:22:31.359 "dma_device_id": "system", 00:22:31.359 "dma_device_type": 1 00:22:31.359 }, 00:22:31.359 { 00:22:31.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.359 "dma_device_type": 2 00:22:31.359 }, 00:22:31.359 { 00:22:31.359 "dma_device_id": "system", 00:22:31.359 "dma_device_type": 1 00:22:31.359 }, 00:22:31.359 { 00:22:31.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.359 "dma_device_type": 2 00:22:31.359 }, 00:22:31.359 { 00:22:31.359 "dma_device_id": "system", 00:22:31.359 "dma_device_type": 1 00:22:31.359 }, 00:22:31.359 { 00:22:31.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.359 "dma_device_type": 2 00:22:31.359 } 00:22:31.359 ], 00:22:31.359 "driver_specific": { 00:22:31.359 "raid": { 00:22:31.359 "uuid": "32bff804-6e0a-4bef-8617-913c870286fc", 00:22:31.359 "strip_size_kb": 0, 00:22:31.359 "state": "online", 00:22:31.359 "raid_level": "raid1", 00:22:31.359 "superblock": false, 00:22:31.359 "num_base_bdevs": 4, 00:22:31.359 "num_base_bdevs_discovered": 4, 00:22:31.359 "num_base_bdevs_operational": 4, 00:22:31.359 "base_bdevs_list": [ 00:22:31.359 { 00:22:31.359 "name": "NewBaseBdev", 00:22:31.359 "uuid": "006c21b6-3fdf-49e0-b69e-b29067c93c3d", 00:22:31.359 "is_configured": true, 00:22:31.359 "data_offset": 0, 00:22:31.359 "data_size": 65536 00:22:31.359 }, 00:22:31.359 { 00:22:31.359 "name": "BaseBdev2", 00:22:31.359 "uuid": "fcb01c4a-c67b-4646-9bcd-4b6bb0e63d50", 00:22:31.359 "is_configured": true, 00:22:31.359 "data_offset": 0, 00:22:31.359 "data_size": 65536 00:22:31.359 }, 00:22:31.359 { 00:22:31.359 "name": "BaseBdev3", 00:22:31.359 "uuid": "1f7a66d5-f02c-4b76-8c66-39999e365af7", 00:22:31.359 
"is_configured": true, 00:22:31.359 "data_offset": 0, 00:22:31.359 "data_size": 65536 00:22:31.359 }, 00:22:31.359 { 00:22:31.359 "name": "BaseBdev4", 00:22:31.359 "uuid": "40b55ccc-5634-4f01-b875-8bba50bdf58d", 00:22:31.359 "is_configured": true, 00:22:31.359 "data_offset": 0, 00:22:31.359 "data_size": 65536 00:22:31.359 } 00:22:31.359 ] 00:22:31.359 } 00:22:31.359 } 00:22:31.359 }' 00:22:31.359 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:31.359 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:31.359 BaseBdev2 00:22:31.359 BaseBdev3 00:22:31.359 BaseBdev4' 00:22:31.359 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:31.359 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:31.359 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:31.359 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:31.359 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.359 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.359 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:31.359 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.621 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:31.621 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:31.621 12:54:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:31.621 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:31.621 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:31.621 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.621 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.621 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.621 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:31.621 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:31.621 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:31.621 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:31.621 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.621 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.621 12:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:31.621 12:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:31.621 12:54:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.621 [2024-12-05 12:54:14.043129] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:31.621 [2024-12-05 12:54:14.043152] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:31.621 [2024-12-05 12:54:14.043219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:31.621 [2024-12-05 12:54:14.043505] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:31.621 [2024-12-05 12:54:14.043518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71085 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71085 ']' 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71085 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71085 00:22:31.621 killing process with pid 71085 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71085' 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71085 00:22:31.621 [2024-12-05 12:54:14.072672] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:31.621 12:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71085 00:22:31.882 [2024-12-05 12:54:14.317053] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:22:32.823 00:22:32.823 real 0m8.334s 00:22:32.823 user 0m13.324s 00:22:32.823 sys 0m1.290s 00:22:32.823 ************************************ 00:22:32.823 END TEST raid_state_function_test 00:22:32.823 ************************************ 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:22:32.823 12:54:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:22:32.823 12:54:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:32.823 12:54:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:32.823 12:54:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:32.823 ************************************ 00:22:32.823 START TEST raid_state_function_test_sb 00:22:32.823 ************************************ 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:32.823 
12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:32.823 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:32.824 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:32.824 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:32.824 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:32.824 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:32.824 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:32.824 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:32.824 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:32.824 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:32.824 Process raid pid: 71724 00:22:32.824 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71724 00:22:32.824 12:54:15 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71724' 00:22:32.824 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71724 00:22:32.824 12:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71724 ']' 00:22:32.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.824 12:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.824 12:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.824 12:54:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:32.824 12:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.824 12:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.824 12:54:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.824 [2024-12-05 12:54:15.141998] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:22:32.824 [2024-12-05 12:54:15.142397] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.824 [2024-12-05 12:54:15.302581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.824 [2024-12-05 12:54:15.403287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.084 [2024-12-05 12:54:15.539708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:33.084 [2024-12-05 12:54:15.539740] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.651 [2024-12-05 12:54:16.018294] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:33.651 [2024-12-05 12:54:16.018343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:33.651 [2024-12-05 12:54:16.018353] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:33.651 [2024-12-05 12:54:16.018363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:33.651 [2024-12-05 12:54:16.018370] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:22:33.651 [2024-12-05 12:54:16.018378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:33.651 [2024-12-05 12:54:16.018384] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:33.651 [2024-12-05 12:54:16.018392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.651 12:54:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:33.651 "name": "Existed_Raid", 00:22:33.651 "uuid": "6363c852-bff1-4959-aa22-64d03776f279", 00:22:33.651 "strip_size_kb": 0, 00:22:33.651 "state": "configuring", 00:22:33.651 "raid_level": "raid1", 00:22:33.651 "superblock": true, 00:22:33.651 "num_base_bdevs": 4, 00:22:33.651 "num_base_bdevs_discovered": 0, 00:22:33.651 "num_base_bdevs_operational": 4, 00:22:33.651 "base_bdevs_list": [ 00:22:33.651 { 00:22:33.651 "name": "BaseBdev1", 00:22:33.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.651 "is_configured": false, 00:22:33.651 "data_offset": 0, 00:22:33.651 "data_size": 0 00:22:33.651 }, 00:22:33.651 { 00:22:33.651 "name": "BaseBdev2", 00:22:33.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.651 "is_configured": false, 00:22:33.651 "data_offset": 0, 00:22:33.651 "data_size": 0 00:22:33.651 }, 00:22:33.651 { 00:22:33.651 "name": "BaseBdev3", 00:22:33.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.651 "is_configured": false, 00:22:33.651 "data_offset": 0, 00:22:33.651 "data_size": 0 00:22:33.651 }, 00:22:33.651 { 00:22:33.651 "name": "BaseBdev4", 00:22:33.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.651 "is_configured": false, 00:22:33.651 "data_offset": 0, 00:22:33.651 "data_size": 0 00:22:33.651 } 00:22:33.651 ] 00:22:33.651 }' 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:33.651 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.909 [2024-12-05 12:54:16.338309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:33.909 [2024-12-05 12:54:16.338464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.909 [2024-12-05 12:54:16.346327] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:33.909 [2024-12-05 12:54:16.346446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:33.909 [2024-12-05 12:54:16.346518] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:33.909 [2024-12-05 12:54:16.346546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:33.909 [2024-12-05 12:54:16.346720] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:33.909 [2024-12-05 12:54:16.346747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:33.909 [2024-12-05 12:54:16.346765] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:22:33.909 [2024-12-05 12:54:16.346785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.909 [2024-12-05 12:54:16.379324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:33.909 BaseBdev1 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.909 [ 00:22:33.909 { 00:22:33.909 "name": "BaseBdev1", 00:22:33.909 "aliases": [ 00:22:33.909 "bcdb36f6-ccbd-4353-baff-bcbb58af9014" 00:22:33.909 ], 00:22:33.909 "product_name": "Malloc disk", 00:22:33.909 "block_size": 512, 00:22:33.909 "num_blocks": 65536, 00:22:33.909 "uuid": "bcdb36f6-ccbd-4353-baff-bcbb58af9014", 00:22:33.909 "assigned_rate_limits": { 00:22:33.909 "rw_ios_per_sec": 0, 00:22:33.909 "rw_mbytes_per_sec": 0, 00:22:33.909 "r_mbytes_per_sec": 0, 00:22:33.909 "w_mbytes_per_sec": 0 00:22:33.909 }, 00:22:33.909 "claimed": true, 00:22:33.909 "claim_type": "exclusive_write", 00:22:33.909 "zoned": false, 00:22:33.909 "supported_io_types": { 00:22:33.909 "read": true, 00:22:33.909 "write": true, 00:22:33.909 "unmap": true, 00:22:33.909 "flush": true, 00:22:33.909 "reset": true, 00:22:33.909 "nvme_admin": false, 00:22:33.909 "nvme_io": false, 00:22:33.909 "nvme_io_md": false, 00:22:33.909 "write_zeroes": true, 00:22:33.909 "zcopy": true, 00:22:33.909 "get_zone_info": false, 00:22:33.909 "zone_management": false, 00:22:33.909 "zone_append": false, 00:22:33.909 "compare": false, 00:22:33.909 "compare_and_write": false, 00:22:33.909 "abort": true, 00:22:33.909 "seek_hole": false, 00:22:33.909 "seek_data": false, 00:22:33.909 "copy": true, 00:22:33.909 "nvme_iov_md": false 00:22:33.909 }, 00:22:33.909 "memory_domains": [ 00:22:33.909 { 00:22:33.909 "dma_device_id": "system", 00:22:33.909 "dma_device_type": 1 00:22:33.909 }, 00:22:33.909 { 00:22:33.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.909 "dma_device_type": 2 00:22:33.909 } 00:22:33.909 ], 00:22:33.909 "driver_specific": {} 
00:22:33.909 } 00:22:33.909 ] 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.909 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:33.909 "name": "Existed_Raid", 00:22:33.909 "uuid": "7d47c007-6cb5-4bf8-98ca-5451e8ba4896", 00:22:33.909 "strip_size_kb": 0, 00:22:33.909 "state": "configuring", 00:22:33.909 "raid_level": "raid1", 00:22:33.909 "superblock": true, 00:22:33.909 "num_base_bdevs": 4, 00:22:33.909 "num_base_bdevs_discovered": 1, 00:22:33.909 "num_base_bdevs_operational": 4, 00:22:33.909 "base_bdevs_list": [ 00:22:33.909 { 00:22:33.909 "name": "BaseBdev1", 00:22:33.910 "uuid": "bcdb36f6-ccbd-4353-baff-bcbb58af9014", 00:22:33.910 "is_configured": true, 00:22:33.910 "data_offset": 2048, 00:22:33.910 "data_size": 63488 00:22:33.910 }, 00:22:33.910 { 00:22:33.910 "name": "BaseBdev2", 00:22:33.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.910 "is_configured": false, 00:22:33.910 "data_offset": 0, 00:22:33.910 "data_size": 0 00:22:33.910 }, 00:22:33.910 { 00:22:33.910 "name": "BaseBdev3", 00:22:33.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.910 "is_configured": false, 00:22:33.910 "data_offset": 0, 00:22:33.910 "data_size": 0 00:22:33.910 }, 00:22:33.910 { 00:22:33.910 "name": "BaseBdev4", 00:22:33.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.910 "is_configured": false, 00:22:33.910 "data_offset": 0, 00:22:33.910 "data_size": 0 00:22:33.910 } 00:22:33.910 ] 00:22:33.910 }' 00:22:33.910 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:33.910 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:34.168 [2024-12-05 12:54:16.731448] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:34.168 [2024-12-05 12:54:16.731606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.168 [2024-12-05 12:54:16.739521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:34.168 [2024-12-05 12:54:16.741410] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:34.168 [2024-12-05 12:54:16.741446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:34.168 [2024-12-05 12:54:16.741455] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:34.168 [2024-12-05 12:54:16.741466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:34.168 [2024-12-05 12:54:16.741472] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:34.168 [2024-12-05 12:54:16.741481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:34.168 12:54:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.168 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.425 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.425 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:34.425 "name": 
"Existed_Raid", 00:22:34.425 "uuid": "a2cc6e4d-d138-4a54-9d0b-12f659e8b919", 00:22:34.425 "strip_size_kb": 0, 00:22:34.425 "state": "configuring", 00:22:34.425 "raid_level": "raid1", 00:22:34.425 "superblock": true, 00:22:34.425 "num_base_bdevs": 4, 00:22:34.425 "num_base_bdevs_discovered": 1, 00:22:34.425 "num_base_bdevs_operational": 4, 00:22:34.425 "base_bdevs_list": [ 00:22:34.425 { 00:22:34.425 "name": "BaseBdev1", 00:22:34.425 "uuid": "bcdb36f6-ccbd-4353-baff-bcbb58af9014", 00:22:34.425 "is_configured": true, 00:22:34.425 "data_offset": 2048, 00:22:34.425 "data_size": 63488 00:22:34.425 }, 00:22:34.425 { 00:22:34.425 "name": "BaseBdev2", 00:22:34.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.425 "is_configured": false, 00:22:34.425 "data_offset": 0, 00:22:34.425 "data_size": 0 00:22:34.425 }, 00:22:34.425 { 00:22:34.425 "name": "BaseBdev3", 00:22:34.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.425 "is_configured": false, 00:22:34.425 "data_offset": 0, 00:22:34.425 "data_size": 0 00:22:34.425 }, 00:22:34.425 { 00:22:34.425 "name": "BaseBdev4", 00:22:34.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.425 "is_configured": false, 00:22:34.425 "data_offset": 0, 00:22:34.425 "data_size": 0 00:22:34.425 } 00:22:34.425 ] 00:22:34.425 }' 00:22:34.425 12:54:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:34.425 12:54:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.683 [2024-12-05 12:54:17.069926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:34.683 
BaseBdev2 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.683 [ 00:22:34.683 { 00:22:34.683 "name": "BaseBdev2", 00:22:34.683 "aliases": [ 00:22:34.683 "42cdd75a-7f87-45fb-86a4-549386fcdc72" 00:22:34.683 ], 00:22:34.683 "product_name": "Malloc disk", 00:22:34.683 "block_size": 512, 00:22:34.683 "num_blocks": 65536, 00:22:34.683 "uuid": "42cdd75a-7f87-45fb-86a4-549386fcdc72", 00:22:34.683 "assigned_rate_limits": { 
00:22:34.683 "rw_ios_per_sec": 0, 00:22:34.683 "rw_mbytes_per_sec": 0, 00:22:34.683 "r_mbytes_per_sec": 0, 00:22:34.683 "w_mbytes_per_sec": 0 00:22:34.683 }, 00:22:34.683 "claimed": true, 00:22:34.683 "claim_type": "exclusive_write", 00:22:34.683 "zoned": false, 00:22:34.683 "supported_io_types": { 00:22:34.683 "read": true, 00:22:34.683 "write": true, 00:22:34.683 "unmap": true, 00:22:34.683 "flush": true, 00:22:34.683 "reset": true, 00:22:34.683 "nvme_admin": false, 00:22:34.683 "nvme_io": false, 00:22:34.683 "nvme_io_md": false, 00:22:34.683 "write_zeroes": true, 00:22:34.683 "zcopy": true, 00:22:34.683 "get_zone_info": false, 00:22:34.683 "zone_management": false, 00:22:34.683 "zone_append": false, 00:22:34.683 "compare": false, 00:22:34.683 "compare_and_write": false, 00:22:34.683 "abort": true, 00:22:34.683 "seek_hole": false, 00:22:34.683 "seek_data": false, 00:22:34.683 "copy": true, 00:22:34.683 "nvme_iov_md": false 00:22:34.683 }, 00:22:34.683 "memory_domains": [ 00:22:34.683 { 00:22:34.683 "dma_device_id": "system", 00:22:34.683 "dma_device_type": 1 00:22:34.683 }, 00:22:34.683 { 00:22:34.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.683 "dma_device_type": 2 00:22:34.683 } 00:22:34.683 ], 00:22:34.683 "driver_specific": {} 00:22:34.683 } 00:22:34.683 ] 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:34.683 "name": "Existed_Raid", 00:22:34.683 "uuid": "a2cc6e4d-d138-4a54-9d0b-12f659e8b919", 00:22:34.683 "strip_size_kb": 0, 00:22:34.683 "state": "configuring", 00:22:34.683 "raid_level": "raid1", 00:22:34.683 "superblock": true, 00:22:34.683 "num_base_bdevs": 4, 00:22:34.683 "num_base_bdevs_discovered": 2, 00:22:34.683 "num_base_bdevs_operational": 4, 00:22:34.683 
"base_bdevs_list": [ 00:22:34.683 { 00:22:34.683 "name": "BaseBdev1", 00:22:34.683 "uuid": "bcdb36f6-ccbd-4353-baff-bcbb58af9014", 00:22:34.683 "is_configured": true, 00:22:34.683 "data_offset": 2048, 00:22:34.683 "data_size": 63488 00:22:34.683 }, 00:22:34.683 { 00:22:34.683 "name": "BaseBdev2", 00:22:34.683 "uuid": "42cdd75a-7f87-45fb-86a4-549386fcdc72", 00:22:34.683 "is_configured": true, 00:22:34.683 "data_offset": 2048, 00:22:34.683 "data_size": 63488 00:22:34.683 }, 00:22:34.683 { 00:22:34.683 "name": "BaseBdev3", 00:22:34.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.683 "is_configured": false, 00:22:34.683 "data_offset": 0, 00:22:34.683 "data_size": 0 00:22:34.683 }, 00:22:34.683 { 00:22:34.683 "name": "BaseBdev4", 00:22:34.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.683 "is_configured": false, 00:22:34.683 "data_offset": 0, 00:22:34.683 "data_size": 0 00:22:34.683 } 00:22:34.683 ] 00:22:34.683 }' 00:22:34.683 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:34.684 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.941 [2024-12-05 12:54:17.441717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:34.941 BaseBdev3 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.941 [ 00:22:34.941 { 00:22:34.941 "name": "BaseBdev3", 00:22:34.941 "aliases": [ 00:22:34.941 "f4d07b21-6f03-451c-85c4-db21db44eac2" 00:22:34.941 ], 00:22:34.941 "product_name": "Malloc disk", 00:22:34.941 "block_size": 512, 00:22:34.941 "num_blocks": 65536, 00:22:34.941 "uuid": "f4d07b21-6f03-451c-85c4-db21db44eac2", 00:22:34.941 "assigned_rate_limits": { 00:22:34.941 "rw_ios_per_sec": 0, 00:22:34.941 "rw_mbytes_per_sec": 0, 00:22:34.941 "r_mbytes_per_sec": 0, 00:22:34.941 "w_mbytes_per_sec": 0 00:22:34.941 }, 00:22:34.941 "claimed": true, 00:22:34.941 "claim_type": "exclusive_write", 00:22:34.941 "zoned": false, 00:22:34.941 "supported_io_types": { 00:22:34.941 "read": true, 00:22:34.941 
"write": true, 00:22:34.941 "unmap": true, 00:22:34.941 "flush": true, 00:22:34.941 "reset": true, 00:22:34.941 "nvme_admin": false, 00:22:34.941 "nvme_io": false, 00:22:34.941 "nvme_io_md": false, 00:22:34.941 "write_zeroes": true, 00:22:34.941 "zcopy": true, 00:22:34.941 "get_zone_info": false, 00:22:34.941 "zone_management": false, 00:22:34.941 "zone_append": false, 00:22:34.941 "compare": false, 00:22:34.941 "compare_and_write": false, 00:22:34.941 "abort": true, 00:22:34.941 "seek_hole": false, 00:22:34.941 "seek_data": false, 00:22:34.941 "copy": true, 00:22:34.941 "nvme_iov_md": false 00:22:34.941 }, 00:22:34.941 "memory_domains": [ 00:22:34.941 { 00:22:34.941 "dma_device_id": "system", 00:22:34.941 "dma_device_type": 1 00:22:34.941 }, 00:22:34.941 { 00:22:34.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.941 "dma_device_type": 2 00:22:34.941 } 00:22:34.941 ], 00:22:34.941 "driver_specific": {} 00:22:34.941 } 00:22:34.941 ] 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:34.941 "name": "Existed_Raid", 00:22:34.941 "uuid": "a2cc6e4d-d138-4a54-9d0b-12f659e8b919", 00:22:34.941 "strip_size_kb": 0, 00:22:34.941 "state": "configuring", 00:22:34.941 "raid_level": "raid1", 00:22:34.941 "superblock": true, 00:22:34.941 "num_base_bdevs": 4, 00:22:34.941 "num_base_bdevs_discovered": 3, 00:22:34.941 "num_base_bdevs_operational": 4, 00:22:34.941 "base_bdevs_list": [ 00:22:34.941 { 00:22:34.941 "name": "BaseBdev1", 00:22:34.941 "uuid": "bcdb36f6-ccbd-4353-baff-bcbb58af9014", 00:22:34.941 "is_configured": true, 00:22:34.941 "data_offset": 2048, 00:22:34.941 "data_size": 63488 00:22:34.941 }, 00:22:34.941 { 00:22:34.941 "name": "BaseBdev2", 00:22:34.941 "uuid": 
"42cdd75a-7f87-45fb-86a4-549386fcdc72", 00:22:34.941 "is_configured": true, 00:22:34.941 "data_offset": 2048, 00:22:34.941 "data_size": 63488 00:22:34.941 }, 00:22:34.941 { 00:22:34.941 "name": "BaseBdev3", 00:22:34.941 "uuid": "f4d07b21-6f03-451c-85c4-db21db44eac2", 00:22:34.941 "is_configured": true, 00:22:34.941 "data_offset": 2048, 00:22:34.941 "data_size": 63488 00:22:34.941 }, 00:22:34.941 { 00:22:34.941 "name": "BaseBdev4", 00:22:34.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.941 "is_configured": false, 00:22:34.941 "data_offset": 0, 00:22:34.941 "data_size": 0 00:22:34.941 } 00:22:34.941 ] 00:22:34.941 }' 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:34.941 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.506 [2024-12-05 12:54:17.836315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:35.506 BaseBdev4 00:22:35.506 [2024-12-05 12:54:17.836656] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:35.506 [2024-12-05 12:54:17.836672] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:35.506 [2024-12-05 12:54:17.836897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:35.506 [2024-12-05 12:54:17.837014] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:35.506 [2024-12-05 12:54:17.837023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:22:35.506 [2024-12-05 12:54:17.837130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.506 [ 00:22:35.506 { 00:22:35.506 "name": "BaseBdev4", 00:22:35.506 "aliases": [ 00:22:35.506 "cda36e02-1f8d-42e4-a1b8-12b57041e9e7" 00:22:35.506 ], 00:22:35.506 "product_name": "Malloc disk", 00:22:35.506 "block_size": 512, 00:22:35.506 
"num_blocks": 65536, 00:22:35.506 "uuid": "cda36e02-1f8d-42e4-a1b8-12b57041e9e7", 00:22:35.506 "assigned_rate_limits": { 00:22:35.506 "rw_ios_per_sec": 0, 00:22:35.506 "rw_mbytes_per_sec": 0, 00:22:35.506 "r_mbytes_per_sec": 0, 00:22:35.506 "w_mbytes_per_sec": 0 00:22:35.506 }, 00:22:35.506 "claimed": true, 00:22:35.506 "claim_type": "exclusive_write", 00:22:35.506 "zoned": false, 00:22:35.506 "supported_io_types": { 00:22:35.506 "read": true, 00:22:35.506 "write": true, 00:22:35.506 "unmap": true, 00:22:35.506 "flush": true, 00:22:35.506 "reset": true, 00:22:35.506 "nvme_admin": false, 00:22:35.506 "nvme_io": false, 00:22:35.506 "nvme_io_md": false, 00:22:35.506 "write_zeroes": true, 00:22:35.506 "zcopy": true, 00:22:35.506 "get_zone_info": false, 00:22:35.506 "zone_management": false, 00:22:35.506 "zone_append": false, 00:22:35.506 "compare": false, 00:22:35.506 "compare_and_write": false, 00:22:35.506 "abort": true, 00:22:35.506 "seek_hole": false, 00:22:35.506 "seek_data": false, 00:22:35.506 "copy": true, 00:22:35.506 "nvme_iov_md": false 00:22:35.506 }, 00:22:35.506 "memory_domains": [ 00:22:35.506 { 00:22:35.506 "dma_device_id": "system", 00:22:35.506 "dma_device_type": 1 00:22:35.506 }, 00:22:35.506 { 00:22:35.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.506 "dma_device_type": 2 00:22:35.506 } 00:22:35.506 ], 00:22:35.506 "driver_specific": {} 00:22:35.506 } 00:22:35.506 ] 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:35.506 "name": "Existed_Raid", 00:22:35.506 "uuid": "a2cc6e4d-d138-4a54-9d0b-12f659e8b919", 00:22:35.506 "strip_size_kb": 0, 00:22:35.506 "state": "online", 00:22:35.506 "raid_level": "raid1", 00:22:35.506 "superblock": true, 00:22:35.506 "num_base_bdevs": 4, 
00:22:35.506 "num_base_bdevs_discovered": 4, 00:22:35.506 "num_base_bdevs_operational": 4, 00:22:35.506 "base_bdevs_list": [ 00:22:35.506 { 00:22:35.506 "name": "BaseBdev1", 00:22:35.506 "uuid": "bcdb36f6-ccbd-4353-baff-bcbb58af9014", 00:22:35.506 "is_configured": true, 00:22:35.506 "data_offset": 2048, 00:22:35.506 "data_size": 63488 00:22:35.506 }, 00:22:35.506 { 00:22:35.506 "name": "BaseBdev2", 00:22:35.506 "uuid": "42cdd75a-7f87-45fb-86a4-549386fcdc72", 00:22:35.506 "is_configured": true, 00:22:35.506 "data_offset": 2048, 00:22:35.506 "data_size": 63488 00:22:35.506 }, 00:22:35.506 { 00:22:35.506 "name": "BaseBdev3", 00:22:35.506 "uuid": "f4d07b21-6f03-451c-85c4-db21db44eac2", 00:22:35.506 "is_configured": true, 00:22:35.506 "data_offset": 2048, 00:22:35.506 "data_size": 63488 00:22:35.506 }, 00:22:35.506 { 00:22:35.506 "name": "BaseBdev4", 00:22:35.506 "uuid": "cda36e02-1f8d-42e4-a1b8-12b57041e9e7", 00:22:35.506 "is_configured": true, 00:22:35.506 "data_offset": 2048, 00:22:35.506 "data_size": 63488 00:22:35.506 } 00:22:35.506 ] 00:22:35.506 }' 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:35.506 12:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.764 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:35.765 
12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.765 [2024-12-05 12:54:18.184749] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:35.765 "name": "Existed_Raid", 00:22:35.765 "aliases": [ 00:22:35.765 "a2cc6e4d-d138-4a54-9d0b-12f659e8b919" 00:22:35.765 ], 00:22:35.765 "product_name": "Raid Volume", 00:22:35.765 "block_size": 512, 00:22:35.765 "num_blocks": 63488, 00:22:35.765 "uuid": "a2cc6e4d-d138-4a54-9d0b-12f659e8b919", 00:22:35.765 "assigned_rate_limits": { 00:22:35.765 "rw_ios_per_sec": 0, 00:22:35.765 "rw_mbytes_per_sec": 0, 00:22:35.765 "r_mbytes_per_sec": 0, 00:22:35.765 "w_mbytes_per_sec": 0 00:22:35.765 }, 00:22:35.765 "claimed": false, 00:22:35.765 "zoned": false, 00:22:35.765 "supported_io_types": { 00:22:35.765 "read": true, 00:22:35.765 "write": true, 00:22:35.765 "unmap": false, 00:22:35.765 "flush": false, 00:22:35.765 "reset": true, 00:22:35.765 "nvme_admin": false, 00:22:35.765 "nvme_io": false, 00:22:35.765 "nvme_io_md": false, 00:22:35.765 "write_zeroes": true, 00:22:35.765 "zcopy": false, 00:22:35.765 "get_zone_info": false, 00:22:35.765 "zone_management": false, 00:22:35.765 "zone_append": false, 00:22:35.765 "compare": false, 00:22:35.765 "compare_and_write": false, 00:22:35.765 "abort": false, 00:22:35.765 "seek_hole": false, 00:22:35.765 "seek_data": false, 00:22:35.765 "copy": false, 00:22:35.765 
"nvme_iov_md": false 00:22:35.765 }, 00:22:35.765 "memory_domains": [ 00:22:35.765 { 00:22:35.765 "dma_device_id": "system", 00:22:35.765 "dma_device_type": 1 00:22:35.765 }, 00:22:35.765 { 00:22:35.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.765 "dma_device_type": 2 00:22:35.765 }, 00:22:35.765 { 00:22:35.765 "dma_device_id": "system", 00:22:35.765 "dma_device_type": 1 00:22:35.765 }, 00:22:35.765 { 00:22:35.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.765 "dma_device_type": 2 00:22:35.765 }, 00:22:35.765 { 00:22:35.765 "dma_device_id": "system", 00:22:35.765 "dma_device_type": 1 00:22:35.765 }, 00:22:35.765 { 00:22:35.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.765 "dma_device_type": 2 00:22:35.765 }, 00:22:35.765 { 00:22:35.765 "dma_device_id": "system", 00:22:35.765 "dma_device_type": 1 00:22:35.765 }, 00:22:35.765 { 00:22:35.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.765 "dma_device_type": 2 00:22:35.765 } 00:22:35.765 ], 00:22:35.765 "driver_specific": { 00:22:35.765 "raid": { 00:22:35.765 "uuid": "a2cc6e4d-d138-4a54-9d0b-12f659e8b919", 00:22:35.765 "strip_size_kb": 0, 00:22:35.765 "state": "online", 00:22:35.765 "raid_level": "raid1", 00:22:35.765 "superblock": true, 00:22:35.765 "num_base_bdevs": 4, 00:22:35.765 "num_base_bdevs_discovered": 4, 00:22:35.765 "num_base_bdevs_operational": 4, 00:22:35.765 "base_bdevs_list": [ 00:22:35.765 { 00:22:35.765 "name": "BaseBdev1", 00:22:35.765 "uuid": "bcdb36f6-ccbd-4353-baff-bcbb58af9014", 00:22:35.765 "is_configured": true, 00:22:35.765 "data_offset": 2048, 00:22:35.765 "data_size": 63488 00:22:35.765 }, 00:22:35.765 { 00:22:35.765 "name": "BaseBdev2", 00:22:35.765 "uuid": "42cdd75a-7f87-45fb-86a4-549386fcdc72", 00:22:35.765 "is_configured": true, 00:22:35.765 "data_offset": 2048, 00:22:35.765 "data_size": 63488 00:22:35.765 }, 00:22:35.765 { 00:22:35.765 "name": "BaseBdev3", 00:22:35.765 "uuid": "f4d07b21-6f03-451c-85c4-db21db44eac2", 00:22:35.765 "is_configured": true, 
00:22:35.765 "data_offset": 2048, 00:22:35.765 "data_size": 63488 00:22:35.765 }, 00:22:35.765 { 00:22:35.765 "name": "BaseBdev4", 00:22:35.765 "uuid": "cda36e02-1f8d-42e4-a1b8-12b57041e9e7", 00:22:35.765 "is_configured": true, 00:22:35.765 "data_offset": 2048, 00:22:35.765 "data_size": 63488 00:22:35.765 } 00:22:35.765 ] 00:22:35.765 } 00:22:35.765 } 00:22:35.765 }' 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:35.765 BaseBdev2 00:22:35.765 BaseBdev3 00:22:35.765 BaseBdev4' 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:35.765 12:54:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.765 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.022 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:36.022 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:36.022 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:22:36.022 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:36.022 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.022 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.022 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:36.022 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.023 [2024-12-05 12:54:18.404534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:36.023 12:54:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:36.023 "name": "Existed_Raid", 00:22:36.023 "uuid": "a2cc6e4d-d138-4a54-9d0b-12f659e8b919", 00:22:36.023 "strip_size_kb": 0, 00:22:36.023 
"state": "online", 00:22:36.023 "raid_level": "raid1", 00:22:36.023 "superblock": true, 00:22:36.023 "num_base_bdevs": 4, 00:22:36.023 "num_base_bdevs_discovered": 3, 00:22:36.023 "num_base_bdevs_operational": 3, 00:22:36.023 "base_bdevs_list": [ 00:22:36.023 { 00:22:36.023 "name": null, 00:22:36.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.023 "is_configured": false, 00:22:36.023 "data_offset": 0, 00:22:36.023 "data_size": 63488 00:22:36.023 }, 00:22:36.023 { 00:22:36.023 "name": "BaseBdev2", 00:22:36.023 "uuid": "42cdd75a-7f87-45fb-86a4-549386fcdc72", 00:22:36.023 "is_configured": true, 00:22:36.023 "data_offset": 2048, 00:22:36.023 "data_size": 63488 00:22:36.023 }, 00:22:36.023 { 00:22:36.023 "name": "BaseBdev3", 00:22:36.023 "uuid": "f4d07b21-6f03-451c-85c4-db21db44eac2", 00:22:36.023 "is_configured": true, 00:22:36.023 "data_offset": 2048, 00:22:36.023 "data_size": 63488 00:22:36.023 }, 00:22:36.023 { 00:22:36.023 "name": "BaseBdev4", 00:22:36.023 "uuid": "cda36e02-1f8d-42e4-a1b8-12b57041e9e7", 00:22:36.023 "is_configured": true, 00:22:36.023 "data_offset": 2048, 00:22:36.023 "data_size": 63488 00:22:36.023 } 00:22:36.023 ] 00:22:36.023 }' 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:36.023 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.280 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:36.280 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:36.280 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:36.280 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.280 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.280 12:54:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.280 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.280 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:36.280 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:36.280 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:36.280 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.280 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.280 [2024-12-05 12:54:18.790331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:36.280 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.280 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:36.280 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:36.280 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.280 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.280 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.280 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:36.280 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.538 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:36.538 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:22:36.538 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:36.538 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.538 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.538 [2024-12-05 12:54:18.877858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:36.538 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.538 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:36.538 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:36.538 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.538 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:36.538 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.538 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.538 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.538 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:36.538 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:36.538 12:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:22:36.538 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.538 12:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.538 [2024-12-05 12:54:18.965428] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:36.538 [2024-12-05 12:54:18.965522] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:36.538 [2024-12-05 12:54:19.012057] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:36.538 [2024-12-05 12:54:19.012098] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:36.538 [2024-12-05 12:54:19.012108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.538 BaseBdev2 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.538 12:54:19 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:22:36.538 [ 00:22:36.538 { 00:22:36.538 "name": "BaseBdev2", 00:22:36.538 "aliases": [ 00:22:36.539 "df4468bf-0c5b-40b7-8b12-94099a922c45" 00:22:36.539 ], 00:22:36.539 "product_name": "Malloc disk", 00:22:36.539 "block_size": 512, 00:22:36.539 "num_blocks": 65536, 00:22:36.539 "uuid": "df4468bf-0c5b-40b7-8b12-94099a922c45", 00:22:36.539 "assigned_rate_limits": { 00:22:36.539 "rw_ios_per_sec": 0, 00:22:36.539 "rw_mbytes_per_sec": 0, 00:22:36.539 "r_mbytes_per_sec": 0, 00:22:36.539 "w_mbytes_per_sec": 0 00:22:36.539 }, 00:22:36.539 "claimed": false, 00:22:36.539 "zoned": false, 00:22:36.539 "supported_io_types": { 00:22:36.539 "read": true, 00:22:36.539 "write": true, 00:22:36.539 "unmap": true, 00:22:36.539 "flush": true, 00:22:36.539 "reset": true, 00:22:36.539 "nvme_admin": false, 00:22:36.539 "nvme_io": false, 00:22:36.539 "nvme_io_md": false, 00:22:36.539 "write_zeroes": true, 00:22:36.539 "zcopy": true, 00:22:36.539 "get_zone_info": false, 00:22:36.539 "zone_management": false, 00:22:36.539 "zone_append": false, 00:22:36.539 "compare": false, 00:22:36.539 "compare_and_write": false, 00:22:36.539 "abort": true, 00:22:36.539 "seek_hole": false, 00:22:36.539 "seek_data": false, 00:22:36.539 "copy": true, 00:22:36.539 "nvme_iov_md": false 00:22:36.539 }, 00:22:36.539 "memory_domains": [ 00:22:36.539 { 00:22:36.539 "dma_device_id": "system", 00:22:36.539 "dma_device_type": 1 00:22:36.539 }, 00:22:36.539 { 00:22:36.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.539 "dma_device_type": 2 00:22:36.539 } 00:22:36.539 ], 00:22:36.539 "driver_specific": {} 00:22:36.539 } 00:22:36.539 ] 00:22:36.539 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.539 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:36.539 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:36.539 12:54:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:36.539 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:36.539 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.539 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.539 BaseBdev3 00:22:36.539 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.539 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:36.539 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:36.539 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.798 12:54:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.798 [ 00:22:36.798 { 00:22:36.798 "name": "BaseBdev3", 00:22:36.798 "aliases": [ 00:22:36.798 "36794e44-1f71-44f1-b759-4bfbd0a48100" 00:22:36.798 ], 00:22:36.798 "product_name": "Malloc disk", 00:22:36.798 "block_size": 512, 00:22:36.798 "num_blocks": 65536, 00:22:36.798 "uuid": "36794e44-1f71-44f1-b759-4bfbd0a48100", 00:22:36.798 "assigned_rate_limits": { 00:22:36.798 "rw_ios_per_sec": 0, 00:22:36.798 "rw_mbytes_per_sec": 0, 00:22:36.798 "r_mbytes_per_sec": 0, 00:22:36.798 "w_mbytes_per_sec": 0 00:22:36.798 }, 00:22:36.798 "claimed": false, 00:22:36.798 "zoned": false, 00:22:36.798 "supported_io_types": { 00:22:36.798 "read": true, 00:22:36.798 "write": true, 00:22:36.798 "unmap": true, 00:22:36.798 "flush": true, 00:22:36.798 "reset": true, 00:22:36.798 "nvme_admin": false, 00:22:36.798 "nvme_io": false, 00:22:36.798 "nvme_io_md": false, 00:22:36.798 "write_zeroes": true, 00:22:36.798 "zcopy": true, 00:22:36.798 "get_zone_info": false, 00:22:36.798 "zone_management": false, 00:22:36.798 "zone_append": false, 00:22:36.798 "compare": false, 00:22:36.798 "compare_and_write": false, 00:22:36.798 "abort": true, 00:22:36.798 "seek_hole": false, 00:22:36.798 "seek_data": false, 00:22:36.798 "copy": true, 00:22:36.798 "nvme_iov_md": false 00:22:36.798 }, 00:22:36.798 "memory_domains": [ 00:22:36.798 { 00:22:36.798 "dma_device_id": "system", 00:22:36.798 "dma_device_type": 1 00:22:36.798 }, 00:22:36.798 { 00:22:36.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.798 "dma_device_type": 2 00:22:36.798 } 00:22:36.798 ], 00:22:36.798 "driver_specific": {} 00:22:36.798 } 00:22:36.798 ] 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.798 BaseBdev4 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.798 [ 00:22:36.798 { 00:22:36.798 "name": "BaseBdev4", 00:22:36.798 "aliases": [ 00:22:36.798 "812fda7b-c36b-49f1-ab0b-b8e57747e629" 00:22:36.798 ], 00:22:36.798 "product_name": "Malloc disk", 00:22:36.798 "block_size": 512, 00:22:36.798 "num_blocks": 65536, 00:22:36.798 "uuid": "812fda7b-c36b-49f1-ab0b-b8e57747e629", 00:22:36.798 "assigned_rate_limits": { 00:22:36.798 "rw_ios_per_sec": 0, 00:22:36.798 "rw_mbytes_per_sec": 0, 00:22:36.798 "r_mbytes_per_sec": 0, 00:22:36.798 "w_mbytes_per_sec": 0 00:22:36.798 }, 00:22:36.798 "claimed": false, 00:22:36.798 "zoned": false, 00:22:36.798 "supported_io_types": { 00:22:36.798 "read": true, 00:22:36.798 "write": true, 00:22:36.798 "unmap": true, 00:22:36.798 "flush": true, 00:22:36.798 "reset": true, 00:22:36.798 "nvme_admin": false, 00:22:36.798 "nvme_io": false, 00:22:36.798 "nvme_io_md": false, 00:22:36.798 "write_zeroes": true, 00:22:36.798 "zcopy": true, 00:22:36.798 "get_zone_info": false, 00:22:36.798 "zone_management": false, 00:22:36.798 "zone_append": false, 00:22:36.798 "compare": false, 00:22:36.798 "compare_and_write": false, 00:22:36.798 "abort": true, 00:22:36.798 "seek_hole": false, 00:22:36.798 "seek_data": false, 00:22:36.798 "copy": true, 00:22:36.798 "nvme_iov_md": false 00:22:36.798 }, 00:22:36.798 "memory_domains": [ 00:22:36.798 { 00:22:36.798 "dma_device_id": "system", 00:22:36.798 "dma_device_type": 1 00:22:36.798 }, 00:22:36.798 { 00:22:36.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.798 "dma_device_type": 2 00:22:36.798 } 00:22:36.798 ], 00:22:36.798 "driver_specific": {} 00:22:36.798 } 00:22:36.798 ] 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.798 [2024-12-05 12:54:19.203029] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:36.798 [2024-12-05 12:54:19.203148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:36.798 [2024-12-05 12:54:19.203169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:36.798 [2024-12-05 12:54:19.204772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:36.798 [2024-12-05 12:54:19.204810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.798 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:36.799 "name": "Existed_Raid", 00:22:36.799 "uuid": "a141979f-bbf1-42ca-b76a-14d6d34ae2cd", 00:22:36.799 "strip_size_kb": 0, 00:22:36.799 "state": "configuring", 00:22:36.799 "raid_level": "raid1", 00:22:36.799 "superblock": true, 00:22:36.799 "num_base_bdevs": 4, 00:22:36.799 "num_base_bdevs_discovered": 3, 00:22:36.799 "num_base_bdevs_operational": 4, 00:22:36.799 "base_bdevs_list": [ 00:22:36.799 { 00:22:36.799 "name": "BaseBdev1", 00:22:36.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.799 "is_configured": false, 00:22:36.799 "data_offset": 0, 00:22:36.799 "data_size": 0 00:22:36.799 }, 00:22:36.799 { 00:22:36.799 "name": "BaseBdev2", 00:22:36.799 "uuid": "df4468bf-0c5b-40b7-8b12-94099a922c45", 
00:22:36.799 "is_configured": true, 00:22:36.799 "data_offset": 2048, 00:22:36.799 "data_size": 63488 00:22:36.799 }, 00:22:36.799 { 00:22:36.799 "name": "BaseBdev3", 00:22:36.799 "uuid": "36794e44-1f71-44f1-b759-4bfbd0a48100", 00:22:36.799 "is_configured": true, 00:22:36.799 "data_offset": 2048, 00:22:36.799 "data_size": 63488 00:22:36.799 }, 00:22:36.799 { 00:22:36.799 "name": "BaseBdev4", 00:22:36.799 "uuid": "812fda7b-c36b-49f1-ab0b-b8e57747e629", 00:22:36.799 "is_configured": true, 00:22:36.799 "data_offset": 2048, 00:22:36.799 "data_size": 63488 00:22:36.799 } 00:22:36.799 ] 00:22:36.799 }' 00:22:36.799 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:36.799 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.057 [2024-12-05 12:54:19.519103] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:37.057 "name": "Existed_Raid", 00:22:37.057 "uuid": "a141979f-bbf1-42ca-b76a-14d6d34ae2cd", 00:22:37.057 "strip_size_kb": 0, 00:22:37.057 "state": "configuring", 00:22:37.057 "raid_level": "raid1", 00:22:37.057 "superblock": true, 00:22:37.057 "num_base_bdevs": 4, 00:22:37.057 "num_base_bdevs_discovered": 2, 00:22:37.057 "num_base_bdevs_operational": 4, 00:22:37.057 "base_bdevs_list": [ 00:22:37.057 { 00:22:37.057 "name": "BaseBdev1", 00:22:37.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.057 "is_configured": false, 00:22:37.057 "data_offset": 0, 00:22:37.057 "data_size": 0 00:22:37.057 }, 00:22:37.057 { 00:22:37.057 "name": null, 00:22:37.057 "uuid": "df4468bf-0c5b-40b7-8b12-94099a922c45", 00:22:37.057 
"is_configured": false, 00:22:37.057 "data_offset": 0, 00:22:37.057 "data_size": 63488 00:22:37.057 }, 00:22:37.057 { 00:22:37.057 "name": "BaseBdev3", 00:22:37.057 "uuid": "36794e44-1f71-44f1-b759-4bfbd0a48100", 00:22:37.057 "is_configured": true, 00:22:37.057 "data_offset": 2048, 00:22:37.057 "data_size": 63488 00:22:37.057 }, 00:22:37.057 { 00:22:37.057 "name": "BaseBdev4", 00:22:37.057 "uuid": "812fda7b-c36b-49f1-ab0b-b8e57747e629", 00:22:37.057 "is_configured": true, 00:22:37.057 "data_offset": 2048, 00:22:37.057 "data_size": 63488 00:22:37.057 } 00:22:37.057 ] 00:22:37.057 }' 00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:37.057 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.315 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.315 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.315 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.315 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:37.315 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.315 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:37.315 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:37.315 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.315 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.572 [2024-12-05 12:54:19.909605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:37.572 BaseBdev1 
00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.572 [ 00:22:37.572 { 00:22:37.572 "name": "BaseBdev1", 00:22:37.572 "aliases": [ 00:22:37.572 "486fe4f0-ac6e-47cd-8828-462f00843eb4" 00:22:37.572 ], 00:22:37.572 "product_name": "Malloc disk", 00:22:37.572 "block_size": 512, 00:22:37.572 "num_blocks": 65536, 00:22:37.572 "uuid": "486fe4f0-ac6e-47cd-8828-462f00843eb4", 00:22:37.572 "assigned_rate_limits": { 00:22:37.572 
"rw_ios_per_sec": 0, 00:22:37.572 "rw_mbytes_per_sec": 0, 00:22:37.572 "r_mbytes_per_sec": 0, 00:22:37.572 "w_mbytes_per_sec": 0 00:22:37.572 }, 00:22:37.572 "claimed": true, 00:22:37.572 "claim_type": "exclusive_write", 00:22:37.572 "zoned": false, 00:22:37.572 "supported_io_types": { 00:22:37.572 "read": true, 00:22:37.572 "write": true, 00:22:37.572 "unmap": true, 00:22:37.572 "flush": true, 00:22:37.572 "reset": true, 00:22:37.572 "nvme_admin": false, 00:22:37.572 "nvme_io": false, 00:22:37.572 "nvme_io_md": false, 00:22:37.572 "write_zeroes": true, 00:22:37.572 "zcopy": true, 00:22:37.572 "get_zone_info": false, 00:22:37.572 "zone_management": false, 00:22:37.572 "zone_append": false, 00:22:37.572 "compare": false, 00:22:37.572 "compare_and_write": false, 00:22:37.572 "abort": true, 00:22:37.572 "seek_hole": false, 00:22:37.572 "seek_data": false, 00:22:37.572 "copy": true, 00:22:37.572 "nvme_iov_md": false 00:22:37.572 }, 00:22:37.572 "memory_domains": [ 00:22:37.572 { 00:22:37.572 "dma_device_id": "system", 00:22:37.572 "dma_device_type": 1 00:22:37.572 }, 00:22:37.572 { 00:22:37.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.572 "dma_device_type": 2 00:22:37.572 } 00:22:37.572 ], 00:22:37.572 "driver_specific": {} 00:22:37.572 } 00:22:37.572 ] 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.572 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.573 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:37.573 "name": "Existed_Raid", 00:22:37.573 "uuid": "a141979f-bbf1-42ca-b76a-14d6d34ae2cd", 00:22:37.573 "strip_size_kb": 0, 00:22:37.573 "state": "configuring", 00:22:37.573 "raid_level": "raid1", 00:22:37.573 "superblock": true, 00:22:37.573 "num_base_bdevs": 4, 00:22:37.573 "num_base_bdevs_discovered": 3, 00:22:37.573 "num_base_bdevs_operational": 4, 00:22:37.573 "base_bdevs_list": [ 00:22:37.573 { 00:22:37.573 "name": "BaseBdev1", 00:22:37.573 "uuid": "486fe4f0-ac6e-47cd-8828-462f00843eb4", 00:22:37.573 "is_configured": true, 00:22:37.573 "data_offset": 2048, 00:22:37.573 "data_size": 63488 
00:22:37.573 }, 00:22:37.573 { 00:22:37.573 "name": null, 00:22:37.573 "uuid": "df4468bf-0c5b-40b7-8b12-94099a922c45", 00:22:37.573 "is_configured": false, 00:22:37.573 "data_offset": 0, 00:22:37.573 "data_size": 63488 00:22:37.573 }, 00:22:37.573 { 00:22:37.573 "name": "BaseBdev3", 00:22:37.573 "uuid": "36794e44-1f71-44f1-b759-4bfbd0a48100", 00:22:37.573 "is_configured": true, 00:22:37.573 "data_offset": 2048, 00:22:37.573 "data_size": 63488 00:22:37.573 }, 00:22:37.573 { 00:22:37.573 "name": "BaseBdev4", 00:22:37.573 "uuid": "812fda7b-c36b-49f1-ab0b-b8e57747e629", 00:22:37.573 "is_configured": true, 00:22:37.573 "data_offset": 2048, 00:22:37.573 "data_size": 63488 00:22:37.573 } 00:22:37.573 ] 00:22:37.573 }' 00:22:37.573 12:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:37.573 12:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.831 
[2024-12-05 12:54:20.297749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.831 12:54:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:37.831 "name": "Existed_Raid", 00:22:37.831 "uuid": "a141979f-bbf1-42ca-b76a-14d6d34ae2cd", 00:22:37.831 "strip_size_kb": 0, 00:22:37.831 "state": "configuring", 00:22:37.831 "raid_level": "raid1", 00:22:37.831 "superblock": true, 00:22:37.831 "num_base_bdevs": 4, 00:22:37.831 "num_base_bdevs_discovered": 2, 00:22:37.831 "num_base_bdevs_operational": 4, 00:22:37.831 "base_bdevs_list": [ 00:22:37.831 { 00:22:37.831 "name": "BaseBdev1", 00:22:37.831 "uuid": "486fe4f0-ac6e-47cd-8828-462f00843eb4", 00:22:37.831 "is_configured": true, 00:22:37.831 "data_offset": 2048, 00:22:37.831 "data_size": 63488 00:22:37.831 }, 00:22:37.831 { 00:22:37.831 "name": null, 00:22:37.831 "uuid": "df4468bf-0c5b-40b7-8b12-94099a922c45", 00:22:37.831 "is_configured": false, 00:22:37.831 "data_offset": 0, 00:22:37.831 "data_size": 63488 00:22:37.831 }, 00:22:37.831 { 00:22:37.831 "name": null, 00:22:37.831 "uuid": "36794e44-1f71-44f1-b759-4bfbd0a48100", 00:22:37.831 "is_configured": false, 00:22:37.831 "data_offset": 0, 00:22:37.831 "data_size": 63488 00:22:37.831 }, 00:22:37.831 { 00:22:37.831 "name": "BaseBdev4", 00:22:37.831 "uuid": "812fda7b-c36b-49f1-ab0b-b8e57747e629", 00:22:37.831 "is_configured": true, 00:22:37.831 "data_offset": 2048, 00:22:37.831 "data_size": 63488 00:22:37.831 } 00:22:37.831 ] 00:22:37.831 }' 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:37.831 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:38.089 12:54:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:38.089 [2024-12-05 12:54:20.637794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.089 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:38.089 "name": "Existed_Raid", 00:22:38.089 "uuid": "a141979f-bbf1-42ca-b76a-14d6d34ae2cd", 00:22:38.089 "strip_size_kb": 0, 00:22:38.089 "state": "configuring", 00:22:38.089 "raid_level": "raid1", 00:22:38.089 "superblock": true, 00:22:38.089 "num_base_bdevs": 4, 00:22:38.089 "num_base_bdevs_discovered": 3, 00:22:38.089 "num_base_bdevs_operational": 4, 00:22:38.089 "base_bdevs_list": [ 00:22:38.090 { 00:22:38.090 "name": "BaseBdev1", 00:22:38.090 "uuid": "486fe4f0-ac6e-47cd-8828-462f00843eb4", 00:22:38.090 "is_configured": true, 00:22:38.090 "data_offset": 2048, 00:22:38.090 "data_size": 63488 00:22:38.090 }, 00:22:38.090 { 00:22:38.090 "name": null, 00:22:38.090 "uuid": "df4468bf-0c5b-40b7-8b12-94099a922c45", 00:22:38.090 "is_configured": false, 00:22:38.090 "data_offset": 0, 00:22:38.090 "data_size": 63488 00:22:38.090 }, 00:22:38.090 { 00:22:38.090 "name": "BaseBdev3", 00:22:38.090 "uuid": "36794e44-1f71-44f1-b759-4bfbd0a48100", 00:22:38.090 "is_configured": true, 00:22:38.090 "data_offset": 2048, 00:22:38.090 "data_size": 63488 00:22:38.090 }, 00:22:38.090 { 00:22:38.090 "name": "BaseBdev4", 00:22:38.090 "uuid": 
"812fda7b-c36b-49f1-ab0b-b8e57747e629", 00:22:38.090 "is_configured": true, 00:22:38.090 "data_offset": 2048, 00:22:38.090 "data_size": 63488 00:22:38.090 } 00:22:38.090 ] 00:22:38.090 }' 00:22:38.090 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:38.090 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:38.655 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.655 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:38.655 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.655 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:38.655 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.655 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:38.655 12:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:38.655 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.655 12:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:38.655 [2024-12-05 12:54:20.969890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:38.655 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.655 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:38.655 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:38.655 12:54:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:38.655 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:38.655 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:38.655 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:38.655 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:38.655 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:38.655 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:38.655 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:38.655 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.655 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.655 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:38.655 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:38.655 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.655 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:38.655 "name": "Existed_Raid", 00:22:38.655 "uuid": "a141979f-bbf1-42ca-b76a-14d6d34ae2cd", 00:22:38.655 "strip_size_kb": 0, 00:22:38.655 "state": "configuring", 00:22:38.655 "raid_level": "raid1", 00:22:38.655 "superblock": true, 00:22:38.655 "num_base_bdevs": 4, 00:22:38.655 "num_base_bdevs_discovered": 2, 00:22:38.655 "num_base_bdevs_operational": 4, 00:22:38.655 "base_bdevs_list": [ 00:22:38.655 { 00:22:38.655 "name": null, 00:22:38.655 
"uuid": "486fe4f0-ac6e-47cd-8828-462f00843eb4", 00:22:38.655 "is_configured": false, 00:22:38.655 "data_offset": 0, 00:22:38.655 "data_size": 63488 00:22:38.655 }, 00:22:38.655 { 00:22:38.655 "name": null, 00:22:38.655 "uuid": "df4468bf-0c5b-40b7-8b12-94099a922c45", 00:22:38.655 "is_configured": false, 00:22:38.655 "data_offset": 0, 00:22:38.655 "data_size": 63488 00:22:38.655 }, 00:22:38.655 { 00:22:38.655 "name": "BaseBdev3", 00:22:38.655 "uuid": "36794e44-1f71-44f1-b759-4bfbd0a48100", 00:22:38.655 "is_configured": true, 00:22:38.655 "data_offset": 2048, 00:22:38.655 "data_size": 63488 00:22:38.655 }, 00:22:38.655 { 00:22:38.655 "name": "BaseBdev4", 00:22:38.655 "uuid": "812fda7b-c36b-49f1-ab0b-b8e57747e629", 00:22:38.655 "is_configured": true, 00:22:38.655 "data_offset": 2048, 00:22:38.655 "data_size": 63488 00:22:38.655 } 00:22:38.655 ] 00:22:38.655 }' 00:22:38.655 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:38.655 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:38.913 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.913 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.913 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:38.913 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:38.913 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.913 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:38.913 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:38.913 12:54:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.913 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:38.913 [2024-12-05 12:54:21.364409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:38.913 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.913 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:38.913 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:38.914 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:38.914 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:38.914 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:38.914 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:38.914 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:38.914 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:38.914 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:38.914 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:38.914 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:38.914 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.914 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.914 12:54:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:38.914 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.914 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:38.914 "name": "Existed_Raid", 00:22:38.914 "uuid": "a141979f-bbf1-42ca-b76a-14d6d34ae2cd", 00:22:38.914 "strip_size_kb": 0, 00:22:38.914 "state": "configuring", 00:22:38.914 "raid_level": "raid1", 00:22:38.914 "superblock": true, 00:22:38.914 "num_base_bdevs": 4, 00:22:38.914 "num_base_bdevs_discovered": 3, 00:22:38.914 "num_base_bdevs_operational": 4, 00:22:38.914 "base_bdevs_list": [ 00:22:38.914 { 00:22:38.914 "name": null, 00:22:38.914 "uuid": "486fe4f0-ac6e-47cd-8828-462f00843eb4", 00:22:38.914 "is_configured": false, 00:22:38.914 "data_offset": 0, 00:22:38.914 "data_size": 63488 00:22:38.914 }, 00:22:38.914 { 00:22:38.914 "name": "BaseBdev2", 00:22:38.914 "uuid": "df4468bf-0c5b-40b7-8b12-94099a922c45", 00:22:38.914 "is_configured": true, 00:22:38.914 "data_offset": 2048, 00:22:38.914 "data_size": 63488 00:22:38.914 }, 00:22:38.914 { 00:22:38.914 "name": "BaseBdev3", 00:22:38.914 "uuid": "36794e44-1f71-44f1-b759-4bfbd0a48100", 00:22:38.914 "is_configured": true, 00:22:38.914 "data_offset": 2048, 00:22:38.914 "data_size": 63488 00:22:38.914 }, 00:22:38.914 { 00:22:38.914 "name": "BaseBdev4", 00:22:38.914 "uuid": "812fda7b-c36b-49f1-ab0b-b8e57747e629", 00:22:38.914 "is_configured": true, 00:22:38.914 "data_offset": 2048, 00:22:38.914 "data_size": 63488 00:22:38.914 } 00:22:38.914 ] 00:22:38.914 }' 00:22:38.914 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:38.914 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.172 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.172 12:54:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.172 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.172 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:39.172 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.172 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:39.172 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.172 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.172 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.172 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:39.172 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.172 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 486fe4f0-ac6e-47cd-8828-462f00843eb4 00:22:39.172 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.172 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.430 [2024-12-05 12:54:21.758673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:39.430 [2024-12-05 12:54:21.758834] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:39.430 [2024-12-05 12:54:21.758846] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:39.430 NewBaseBdev 00:22:39.430 [2024-12-05 12:54:21.759053] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:22:39.430 [2024-12-05 12:54:21.759160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:39.430 [2024-12-05 12:54:21.759167] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:39.430 [2024-12-05 12:54:21.759260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.430 
12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.430 [ 00:22:39.430 { 00:22:39.430 "name": "NewBaseBdev", 00:22:39.430 "aliases": [ 00:22:39.430 "486fe4f0-ac6e-47cd-8828-462f00843eb4" 00:22:39.430 ], 00:22:39.430 "product_name": "Malloc disk", 00:22:39.430 "block_size": 512, 00:22:39.430 "num_blocks": 65536, 00:22:39.430 "uuid": "486fe4f0-ac6e-47cd-8828-462f00843eb4", 00:22:39.430 "assigned_rate_limits": { 00:22:39.430 "rw_ios_per_sec": 0, 00:22:39.430 "rw_mbytes_per_sec": 0, 00:22:39.430 "r_mbytes_per_sec": 0, 00:22:39.430 "w_mbytes_per_sec": 0 00:22:39.430 }, 00:22:39.430 "claimed": true, 00:22:39.430 "claim_type": "exclusive_write", 00:22:39.430 "zoned": false, 00:22:39.430 "supported_io_types": { 00:22:39.430 "read": true, 00:22:39.430 "write": true, 00:22:39.430 "unmap": true, 00:22:39.430 "flush": true, 00:22:39.430 "reset": true, 00:22:39.430 "nvme_admin": false, 00:22:39.430 "nvme_io": false, 00:22:39.430 "nvme_io_md": false, 00:22:39.430 "write_zeroes": true, 00:22:39.430 "zcopy": true, 00:22:39.430 "get_zone_info": false, 00:22:39.430 "zone_management": false, 00:22:39.430 "zone_append": false, 00:22:39.430 "compare": false, 00:22:39.430 "compare_and_write": false, 00:22:39.430 "abort": true, 00:22:39.430 "seek_hole": false, 00:22:39.430 "seek_data": false, 00:22:39.430 "copy": true, 00:22:39.430 "nvme_iov_md": false 00:22:39.430 }, 00:22:39.430 "memory_domains": [ 00:22:39.430 { 00:22:39.430 "dma_device_id": "system", 00:22:39.430 "dma_device_type": 1 00:22:39.430 }, 00:22:39.430 { 00:22:39.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.430 "dma_device_type": 2 00:22:39.430 } 00:22:39.430 ], 00:22:39.430 "driver_specific": {} 00:22:39.430 } 00:22:39.430 ] 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:39.430 12:54:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.430 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:39.430 "name": "Existed_Raid", 00:22:39.430 "uuid": "a141979f-bbf1-42ca-b76a-14d6d34ae2cd", 00:22:39.430 "strip_size_kb": 0, 00:22:39.430 
"state": "online", 00:22:39.430 "raid_level": "raid1", 00:22:39.430 "superblock": true, 00:22:39.430 "num_base_bdevs": 4, 00:22:39.430 "num_base_bdevs_discovered": 4, 00:22:39.430 "num_base_bdevs_operational": 4, 00:22:39.430 "base_bdevs_list": [ 00:22:39.430 { 00:22:39.430 "name": "NewBaseBdev", 00:22:39.430 "uuid": "486fe4f0-ac6e-47cd-8828-462f00843eb4", 00:22:39.430 "is_configured": true, 00:22:39.430 "data_offset": 2048, 00:22:39.430 "data_size": 63488 00:22:39.430 }, 00:22:39.430 { 00:22:39.430 "name": "BaseBdev2", 00:22:39.430 "uuid": "df4468bf-0c5b-40b7-8b12-94099a922c45", 00:22:39.430 "is_configured": true, 00:22:39.430 "data_offset": 2048, 00:22:39.430 "data_size": 63488 00:22:39.430 }, 00:22:39.430 { 00:22:39.430 "name": "BaseBdev3", 00:22:39.430 "uuid": "36794e44-1f71-44f1-b759-4bfbd0a48100", 00:22:39.431 "is_configured": true, 00:22:39.431 "data_offset": 2048, 00:22:39.431 "data_size": 63488 00:22:39.431 }, 00:22:39.431 { 00:22:39.431 "name": "BaseBdev4", 00:22:39.431 "uuid": "812fda7b-c36b-49f1-ab0b-b8e57747e629", 00:22:39.431 "is_configured": true, 00:22:39.431 "data_offset": 2048, 00:22:39.431 "data_size": 63488 00:22:39.431 } 00:22:39.431 ] 00:22:39.431 }' 00:22:39.431 12:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:39.431 12:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.689 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:39.689 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:39.689 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:39.689 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:39.689 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:39.689 
12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:39.689 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:39.689 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:39.689 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.689 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.689 [2024-12-05 12:54:22.099065] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:39.689 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.689 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:39.689 "name": "Existed_Raid", 00:22:39.689 "aliases": [ 00:22:39.689 "a141979f-bbf1-42ca-b76a-14d6d34ae2cd" 00:22:39.689 ], 00:22:39.689 "product_name": "Raid Volume", 00:22:39.689 "block_size": 512, 00:22:39.689 "num_blocks": 63488, 00:22:39.689 "uuid": "a141979f-bbf1-42ca-b76a-14d6d34ae2cd", 00:22:39.689 "assigned_rate_limits": { 00:22:39.689 "rw_ios_per_sec": 0, 00:22:39.689 "rw_mbytes_per_sec": 0, 00:22:39.689 "r_mbytes_per_sec": 0, 00:22:39.689 "w_mbytes_per_sec": 0 00:22:39.689 }, 00:22:39.689 "claimed": false, 00:22:39.689 "zoned": false, 00:22:39.689 "supported_io_types": { 00:22:39.689 "read": true, 00:22:39.689 "write": true, 00:22:39.689 "unmap": false, 00:22:39.689 "flush": false, 00:22:39.689 "reset": true, 00:22:39.689 "nvme_admin": false, 00:22:39.689 "nvme_io": false, 00:22:39.689 "nvme_io_md": false, 00:22:39.689 "write_zeroes": true, 00:22:39.689 "zcopy": false, 00:22:39.689 "get_zone_info": false, 00:22:39.689 "zone_management": false, 00:22:39.689 "zone_append": false, 00:22:39.689 "compare": false, 00:22:39.689 "compare_and_write": false, 00:22:39.689 
"abort": false, 00:22:39.689 "seek_hole": false, 00:22:39.689 "seek_data": false, 00:22:39.689 "copy": false, 00:22:39.689 "nvme_iov_md": false 00:22:39.689 }, 00:22:39.689 "memory_domains": [ 00:22:39.689 { 00:22:39.689 "dma_device_id": "system", 00:22:39.689 "dma_device_type": 1 00:22:39.689 }, 00:22:39.689 { 00:22:39.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.689 "dma_device_type": 2 00:22:39.689 }, 00:22:39.689 { 00:22:39.689 "dma_device_id": "system", 00:22:39.689 "dma_device_type": 1 00:22:39.689 }, 00:22:39.689 { 00:22:39.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.689 "dma_device_type": 2 00:22:39.689 }, 00:22:39.689 { 00:22:39.689 "dma_device_id": "system", 00:22:39.689 "dma_device_type": 1 00:22:39.689 }, 00:22:39.689 { 00:22:39.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.689 "dma_device_type": 2 00:22:39.689 }, 00:22:39.689 { 00:22:39.689 "dma_device_id": "system", 00:22:39.689 "dma_device_type": 1 00:22:39.689 }, 00:22:39.689 { 00:22:39.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.689 "dma_device_type": 2 00:22:39.689 } 00:22:39.689 ], 00:22:39.689 "driver_specific": { 00:22:39.689 "raid": { 00:22:39.690 "uuid": "a141979f-bbf1-42ca-b76a-14d6d34ae2cd", 00:22:39.690 "strip_size_kb": 0, 00:22:39.690 "state": "online", 00:22:39.690 "raid_level": "raid1", 00:22:39.690 "superblock": true, 00:22:39.690 "num_base_bdevs": 4, 00:22:39.690 "num_base_bdevs_discovered": 4, 00:22:39.690 "num_base_bdevs_operational": 4, 00:22:39.690 "base_bdevs_list": [ 00:22:39.690 { 00:22:39.690 "name": "NewBaseBdev", 00:22:39.690 "uuid": "486fe4f0-ac6e-47cd-8828-462f00843eb4", 00:22:39.690 "is_configured": true, 00:22:39.690 "data_offset": 2048, 00:22:39.690 "data_size": 63488 00:22:39.690 }, 00:22:39.690 { 00:22:39.690 "name": "BaseBdev2", 00:22:39.690 "uuid": "df4468bf-0c5b-40b7-8b12-94099a922c45", 00:22:39.690 "is_configured": true, 00:22:39.690 "data_offset": 2048, 00:22:39.690 "data_size": 63488 00:22:39.690 }, 00:22:39.690 { 
00:22:39.690 "name": "BaseBdev3", 00:22:39.690 "uuid": "36794e44-1f71-44f1-b759-4bfbd0a48100", 00:22:39.690 "is_configured": true, 00:22:39.690 "data_offset": 2048, 00:22:39.690 "data_size": 63488 00:22:39.690 }, 00:22:39.690 { 00:22:39.690 "name": "BaseBdev4", 00:22:39.690 "uuid": "812fda7b-c36b-49f1-ab0b-b8e57747e629", 00:22:39.690 "is_configured": true, 00:22:39.690 "data_offset": 2048, 00:22:39.690 "data_size": 63488 00:22:39.690 } 00:22:39.690 ] 00:22:39.690 } 00:22:39.690 } 00:22:39.690 }' 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:39.690 BaseBdev2 00:22:39.690 BaseBdev3 00:22:39.690 BaseBdev4' 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.690 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:39.947 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.947 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:39.947 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:39.947 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:39.947 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.947 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.947 [2024-12-05 12:54:22.302805] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:39.947 [2024-12-05 12:54:22.302908] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:39.947 [2024-12-05 12:54:22.302978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:39.947 [2024-12-05 12:54:22.303213] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:39.947 [2024-12-05 12:54:22.303224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:22:39.947 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.947 12:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71724 00:22:39.947 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71724 ']' 00:22:39.947 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71724 00:22:39.947 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:22:39.947 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:39.947 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71724 00:22:39.947 killing process with pid 71724 00:22:39.947 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:39.947 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:39.947 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71724' 00:22:39.947 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71724 00:22:39.947 [2024-12-05 12:54:22.332106] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:39.947 12:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71724 00:22:39.947 [2024-12-05 12:54:22.528983] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:40.880 12:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:22:40.880 00:22:40.880 real 0m8.029s 00:22:40.880 user 0m13.032s 00:22:40.880 sys 0m1.276s 00:22:40.880 ************************************ 00:22:40.880 END TEST raid_state_function_test_sb 
00:22:40.880 ************************************ 00:22:40.880 12:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:40.880 12:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.880 12:54:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:22:40.880 12:54:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:40.881 12:54:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:40.881 12:54:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:40.881 ************************************ 00:22:40.881 START TEST raid_superblock_test 00:22:40.881 ************************************ 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:40.881 12:54:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72356 00:22:40.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72356 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72356 ']' 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.881 12:54:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.881 [2024-12-05 12:54:23.214089] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:22:40.881 [2024-12-05 12:54:23.214212] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72356 ] 00:22:40.881 [2024-12-05 12:54:23.374049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.138 [2024-12-05 12:54:23.473738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.138 [2024-12-05 12:54:23.611397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:41.138 [2024-12-05 12:54:23.611432] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:22:41.704 
12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.704 malloc1 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.704 [2024-12-05 12:54:24.104333] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:41.704 [2024-12-05 12:54:24.104391] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.704 [2024-12-05 12:54:24.104411] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:41.704 [2024-12-05 12:54:24.104420] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.704 [2024-12-05 12:54:24.106535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.704 [2024-12-05 12:54:24.106673] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:41.704 pt1 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.704 malloc2 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.704 [2024-12-05 12:54:24.140015] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:41.704 [2024-12-05 12:54:24.140099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.704 [2024-12-05 12:54:24.140183] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:41.704 [2024-12-05 12:54:24.140205] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.704 [2024-12-05 12:54:24.142351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.704 [2024-12-05 12:54:24.142503] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:41.704 
pt2 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.704 malloc3 00:22:41.704 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.705 [2024-12-05 12:54:24.187716] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:41.705 [2024-12-05 12:54:24.187764] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.705 [2024-12-05 12:54:24.187785] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:41.705 [2024-12-05 12:54:24.187794] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.705 [2024-12-05 12:54:24.189875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.705 [2024-12-05 12:54:24.189908] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:41.705 pt3 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.705 malloc4 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.705 [2024-12-05 12:54:24.223417] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:41.705 [2024-12-05 12:54:24.223581] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.705 [2024-12-05 12:54:24.223603] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:41.705 [2024-12-05 12:54:24.223612] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.705 [2024-12-05 12:54:24.225689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.705 [2024-12-05 12:54:24.225719] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:41.705 pt4 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.705 [2024-12-05 12:54:24.231445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:41.705 [2024-12-05 12:54:24.233351] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:41.705 [2024-12-05 12:54:24.233412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:41.705 [2024-12-05 12:54:24.233472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:41.705 [2024-12-05 12:54:24.233663] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:41.705 [2024-12-05 12:54:24.233678] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:41.705 [2024-12-05 12:54:24.233929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:41.705 [2024-12-05 12:54:24.234078] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:41.705 [2024-12-05 12:54:24.234090] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:41.705 [2024-12-05 12:54:24.234223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:41.705 
12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:41.705 "name": "raid_bdev1", 00:22:41.705 "uuid": "0092cdd3-2550-450d-b471-4528ecb192d0", 00:22:41.705 "strip_size_kb": 0, 00:22:41.705 "state": "online", 00:22:41.705 "raid_level": "raid1", 00:22:41.705 "superblock": true, 00:22:41.705 "num_base_bdevs": 4, 00:22:41.705 "num_base_bdevs_discovered": 4, 00:22:41.705 "num_base_bdevs_operational": 4, 00:22:41.705 "base_bdevs_list": [ 00:22:41.705 { 00:22:41.705 "name": "pt1", 00:22:41.705 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:41.705 "is_configured": true, 00:22:41.705 "data_offset": 2048, 00:22:41.705 "data_size": 63488 00:22:41.705 }, 00:22:41.705 { 00:22:41.705 "name": "pt2", 00:22:41.705 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:41.705 "is_configured": true, 00:22:41.705 "data_offset": 2048, 00:22:41.705 "data_size": 63488 00:22:41.705 }, 00:22:41.705 { 00:22:41.705 "name": "pt3", 00:22:41.705 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:41.705 "is_configured": true, 00:22:41.705 "data_offset": 2048, 00:22:41.705 "data_size": 63488 
00:22:41.705 }, 00:22:41.705 { 00:22:41.705 "name": "pt4", 00:22:41.705 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:41.705 "is_configured": true, 00:22:41.705 "data_offset": 2048, 00:22:41.705 "data_size": 63488 00:22:41.705 } 00:22:41.705 ] 00:22:41.705 }' 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:41.705 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.269 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:42.269 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:42.269 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:42.269 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:42.269 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:42.269 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:42.269 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:42.269 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.269 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.269 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:42.269 [2024-12-05 12:54:24.559881] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:42.269 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.269 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:42.269 "name": "raid_bdev1", 00:22:42.269 "aliases": [ 00:22:42.269 "0092cdd3-2550-450d-b471-4528ecb192d0" 00:22:42.269 ], 
00:22:42.269 "product_name": "Raid Volume", 00:22:42.269 "block_size": 512, 00:22:42.269 "num_blocks": 63488, 00:22:42.269 "uuid": "0092cdd3-2550-450d-b471-4528ecb192d0", 00:22:42.269 "assigned_rate_limits": { 00:22:42.269 "rw_ios_per_sec": 0, 00:22:42.269 "rw_mbytes_per_sec": 0, 00:22:42.269 "r_mbytes_per_sec": 0, 00:22:42.269 "w_mbytes_per_sec": 0 00:22:42.269 }, 00:22:42.269 "claimed": false, 00:22:42.269 "zoned": false, 00:22:42.269 "supported_io_types": { 00:22:42.269 "read": true, 00:22:42.269 "write": true, 00:22:42.269 "unmap": false, 00:22:42.269 "flush": false, 00:22:42.269 "reset": true, 00:22:42.269 "nvme_admin": false, 00:22:42.269 "nvme_io": false, 00:22:42.269 "nvme_io_md": false, 00:22:42.269 "write_zeroes": true, 00:22:42.269 "zcopy": false, 00:22:42.269 "get_zone_info": false, 00:22:42.269 "zone_management": false, 00:22:42.269 "zone_append": false, 00:22:42.269 "compare": false, 00:22:42.269 "compare_and_write": false, 00:22:42.269 "abort": false, 00:22:42.269 "seek_hole": false, 00:22:42.269 "seek_data": false, 00:22:42.269 "copy": false, 00:22:42.269 "nvme_iov_md": false 00:22:42.269 }, 00:22:42.269 "memory_domains": [ 00:22:42.269 { 00:22:42.269 "dma_device_id": "system", 00:22:42.269 "dma_device_type": 1 00:22:42.269 }, 00:22:42.269 { 00:22:42.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.269 "dma_device_type": 2 00:22:42.269 }, 00:22:42.269 { 00:22:42.269 "dma_device_id": "system", 00:22:42.269 "dma_device_type": 1 00:22:42.269 }, 00:22:42.269 { 00:22:42.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.269 "dma_device_type": 2 00:22:42.269 }, 00:22:42.269 { 00:22:42.269 "dma_device_id": "system", 00:22:42.269 "dma_device_type": 1 00:22:42.269 }, 00:22:42.269 { 00:22:42.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.269 "dma_device_type": 2 00:22:42.269 }, 00:22:42.269 { 00:22:42.269 "dma_device_id": "system", 00:22:42.269 "dma_device_type": 1 00:22:42.269 }, 00:22:42.269 { 00:22:42.269 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:42.269 "dma_device_type": 2 00:22:42.269 } 00:22:42.269 ], 00:22:42.269 "driver_specific": { 00:22:42.269 "raid": { 00:22:42.269 "uuid": "0092cdd3-2550-450d-b471-4528ecb192d0", 00:22:42.269 "strip_size_kb": 0, 00:22:42.269 "state": "online", 00:22:42.269 "raid_level": "raid1", 00:22:42.269 "superblock": true, 00:22:42.269 "num_base_bdevs": 4, 00:22:42.269 "num_base_bdevs_discovered": 4, 00:22:42.269 "num_base_bdevs_operational": 4, 00:22:42.269 "base_bdevs_list": [ 00:22:42.269 { 00:22:42.269 "name": "pt1", 00:22:42.269 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:42.270 "is_configured": true, 00:22:42.270 "data_offset": 2048, 00:22:42.270 "data_size": 63488 00:22:42.270 }, 00:22:42.270 { 00:22:42.270 "name": "pt2", 00:22:42.270 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:42.270 "is_configured": true, 00:22:42.270 "data_offset": 2048, 00:22:42.270 "data_size": 63488 00:22:42.270 }, 00:22:42.270 { 00:22:42.270 "name": "pt3", 00:22:42.270 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:42.270 "is_configured": true, 00:22:42.270 "data_offset": 2048, 00:22:42.270 "data_size": 63488 00:22:42.270 }, 00:22:42.270 { 00:22:42.270 "name": "pt4", 00:22:42.270 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:42.270 "is_configured": true, 00:22:42.270 "data_offset": 2048, 00:22:42.270 "data_size": 63488 00:22:42.270 } 00:22:42.270 ] 00:22:42.270 } 00:22:42.270 } 00:22:42.270 }' 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:42.270 pt2 00:22:42.270 pt3 00:22:42.270 pt4' 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:42.270 12:54:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.270 [2024-12-05 12:54:24.819913] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0092cdd3-2550-450d-b471-4528ecb192d0 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0092cdd3-2550-450d-b471-4528ecb192d0 ']' 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.270 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.270 [2024-12-05 12:54:24.847578] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:42.270 [2024-12-05 12:54:24.847598] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:42.270 [2024-12-05 12:54:24.847665] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:42.270 [2024-12-05 12:54:24.847752] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:42.270 [2024-12-05 12:54:24.847766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:42.528 12:54:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.528 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.528 [2024-12-05 12:54:24.959627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:42.528 [2024-12-05 12:54:24.961477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:42.528 [2024-12-05 12:54:24.961534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:42.528 [2024-12-05 12:54:24.961570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:42.528 [2024-12-05 12:54:24.961616] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:42.528 [2024-12-05 12:54:24.961663] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:42.528 [2024-12-05 12:54:24.961681] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:42.528 [2024-12-05 12:54:24.961699] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:22:42.528 [2024-12-05 12:54:24.961711] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:42.528 [2024-12-05 12:54:24.961722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:22:42.528 request: 00:22:42.528 { 00:22:42.528 "name": "raid_bdev1", 00:22:42.528 "raid_level": "raid1", 00:22:42.528 "base_bdevs": [ 00:22:42.528 "malloc1", 00:22:42.528 "malloc2", 00:22:42.528 "malloc3", 00:22:42.528 "malloc4" 00:22:42.528 ], 00:22:42.528 "superblock": false, 00:22:42.528 "method": "bdev_raid_create", 00:22:42.528 "req_id": 1 00:22:42.528 } 00:22:42.528 Got JSON-RPC error response 00:22:42.528 response: 00:22:42.529 { 00:22:42.529 "code": -17, 00:22:42.529 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:42.529 } 00:22:42.529 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:42.529 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:22:42.529 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:42.529 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:42.529 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:42.529 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.529 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:42.529 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.529 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.529 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.529 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:42.529 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:42.529 12:54:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:42.529 
12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.529 12:54:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.529 [2024-12-05 12:54:25.003618] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:42.529 [2024-12-05 12:54:25.003665] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.529 [2024-12-05 12:54:25.003680] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:42.529 [2024-12-05 12:54:25.003690] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.529 [2024-12-05 12:54:25.005807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.529 [2024-12-05 12:54:25.005844] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:42.529 [2024-12-05 12:54:25.005917] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:42.529 [2024-12-05 12:54:25.005967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:42.529 pt1 00:22:42.529 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.529 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:42.529 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:42.529 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:42.529 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:42.529 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:42.529 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:42.529 12:54:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:42.529 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:42.529 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:42.529 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:42.529 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.529 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.529 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.529 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.529 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.529 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:42.529 "name": "raid_bdev1", 00:22:42.529 "uuid": "0092cdd3-2550-450d-b471-4528ecb192d0", 00:22:42.529 "strip_size_kb": 0, 00:22:42.529 "state": "configuring", 00:22:42.529 "raid_level": "raid1", 00:22:42.529 "superblock": true, 00:22:42.529 "num_base_bdevs": 4, 00:22:42.529 "num_base_bdevs_discovered": 1, 00:22:42.529 "num_base_bdevs_operational": 4, 00:22:42.529 "base_bdevs_list": [ 00:22:42.529 { 00:22:42.529 "name": "pt1", 00:22:42.529 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:42.529 "is_configured": true, 00:22:42.529 "data_offset": 2048, 00:22:42.529 "data_size": 63488 00:22:42.529 }, 00:22:42.529 { 00:22:42.529 "name": null, 00:22:42.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:42.529 "is_configured": false, 00:22:42.529 "data_offset": 2048, 00:22:42.529 "data_size": 63488 00:22:42.529 }, 00:22:42.529 { 00:22:42.529 "name": null, 00:22:42.529 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:42.529 
"is_configured": false, 00:22:42.529 "data_offset": 2048, 00:22:42.529 "data_size": 63488 00:22:42.529 }, 00:22:42.529 { 00:22:42.529 "name": null, 00:22:42.529 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:42.529 "is_configured": false, 00:22:42.529 "data_offset": 2048, 00:22:42.529 "data_size": 63488 00:22:42.529 } 00:22:42.529 ] 00:22:42.529 }' 00:22:42.529 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:42.529 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.786 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:22:42.786 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:42.786 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.786 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.786 [2024-12-05 12:54:25.311712] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:42.786 [2024-12-05 12:54:25.311776] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.786 [2024-12-05 12:54:25.311795] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:42.786 [2024-12-05 12:54:25.311806] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.786 [2024-12-05 12:54:25.312214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.786 [2024-12-05 12:54:25.312230] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:42.786 [2024-12-05 12:54:25.312296] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:42.786 [2024-12-05 12:54:25.312317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:22:42.786 pt2 00:22:42.786 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.786 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:22:42.786 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.786 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.786 [2024-12-05 12:54:25.319711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:42.786 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.786 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:42.787 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:42.787 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:42.787 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:42.787 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:42.787 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:42.787 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:42.787 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:42.787 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:42.787 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:42.787 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.787 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:22:42.787 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.787 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.787 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.787 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:42.787 "name": "raid_bdev1", 00:22:42.787 "uuid": "0092cdd3-2550-450d-b471-4528ecb192d0", 00:22:42.787 "strip_size_kb": 0, 00:22:42.787 "state": "configuring", 00:22:42.787 "raid_level": "raid1", 00:22:42.787 "superblock": true, 00:22:42.787 "num_base_bdevs": 4, 00:22:42.787 "num_base_bdevs_discovered": 1, 00:22:42.787 "num_base_bdevs_operational": 4, 00:22:42.787 "base_bdevs_list": [ 00:22:42.787 { 00:22:42.787 "name": "pt1", 00:22:42.787 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:42.787 "is_configured": true, 00:22:42.787 "data_offset": 2048, 00:22:42.787 "data_size": 63488 00:22:42.787 }, 00:22:42.787 { 00:22:42.787 "name": null, 00:22:42.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:42.787 "is_configured": false, 00:22:42.787 "data_offset": 0, 00:22:42.787 "data_size": 63488 00:22:42.787 }, 00:22:42.787 { 00:22:42.787 "name": null, 00:22:42.787 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:42.787 "is_configured": false, 00:22:42.787 "data_offset": 2048, 00:22:42.787 "data_size": 63488 00:22:42.787 }, 00:22:42.787 { 00:22:42.787 "name": null, 00:22:42.787 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:42.787 "is_configured": false, 00:22:42.787 "data_offset": 2048, 00:22:42.787 "data_size": 63488 00:22:42.787 } 00:22:42.787 ] 00:22:42.787 }' 00:22:42.787 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:42.787 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.350 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:22:43.350 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:43.350 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:43.350 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.350 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.350 [2024-12-05 12:54:25.643788] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:43.350 [2024-12-05 12:54:25.643850] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.350 [2024-12-05 12:54:25.643869] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:43.350 [2024-12-05 12:54:25.643879] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.350 [2024-12-05 12:54:25.644282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.350 [2024-12-05 12:54:25.644302] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:43.350 [2024-12-05 12:54:25.644373] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:43.350 [2024-12-05 12:54:25.644392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:43.350 pt2 00:22:43.350 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.350 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:43.350 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:43.350 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:43.350 12:54:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.351 [2024-12-05 12:54:25.651770] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:43.351 [2024-12-05 12:54:25.651809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.351 [2024-12-05 12:54:25.651832] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:43.351 [2024-12-05 12:54:25.651841] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.351 [2024-12-05 12:54:25.652184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.351 [2024-12-05 12:54:25.652201] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:43.351 [2024-12-05 12:54:25.652263] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:43.351 [2024-12-05 12:54:25.652279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:43.351 pt3 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.351 [2024-12-05 12:54:25.659750] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:43.351 [2024-12-05 
12:54:25.659785] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.351 [2024-12-05 12:54:25.659799] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:43.351 [2024-12-05 12:54:25.659806] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.351 [2024-12-05 12:54:25.660160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.351 [2024-12-05 12:54:25.660183] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:43.351 [2024-12-05 12:54:25.660238] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:43.351 [2024-12-05 12:54:25.660257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:43.351 [2024-12-05 12:54:25.660384] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:43.351 [2024-12-05 12:54:25.660396] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:43.351 [2024-12-05 12:54:25.660661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:43.351 [2024-12-05 12:54:25.660797] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:43.351 [2024-12-05 12:54:25.660807] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:43.351 [2024-12-05 12:54:25.660926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.351 pt4 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:43.351 "name": "raid_bdev1", 00:22:43.351 "uuid": "0092cdd3-2550-450d-b471-4528ecb192d0", 00:22:43.351 "strip_size_kb": 0, 00:22:43.351 "state": "online", 00:22:43.351 "raid_level": "raid1", 00:22:43.351 "superblock": true, 00:22:43.351 "num_base_bdevs": 4, 00:22:43.351 
"num_base_bdevs_discovered": 4, 00:22:43.351 "num_base_bdevs_operational": 4, 00:22:43.351 "base_bdevs_list": [ 00:22:43.351 { 00:22:43.351 "name": "pt1", 00:22:43.351 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:43.351 "is_configured": true, 00:22:43.351 "data_offset": 2048, 00:22:43.351 "data_size": 63488 00:22:43.351 }, 00:22:43.351 { 00:22:43.351 "name": "pt2", 00:22:43.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:43.351 "is_configured": true, 00:22:43.351 "data_offset": 2048, 00:22:43.351 "data_size": 63488 00:22:43.351 }, 00:22:43.351 { 00:22:43.351 "name": "pt3", 00:22:43.351 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:43.351 "is_configured": true, 00:22:43.351 "data_offset": 2048, 00:22:43.351 "data_size": 63488 00:22:43.351 }, 00:22:43.351 { 00:22:43.351 "name": "pt4", 00:22:43.351 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:43.351 "is_configured": true, 00:22:43.351 "data_offset": 2048, 00:22:43.351 "data_size": 63488 00:22:43.351 } 00:22:43.351 ] 00:22:43.351 }' 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:43.351 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.609 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:43.609 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:43.609 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:43.609 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:43.609 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:43.609 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:43.609 12:54:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:43.609 12:54:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:43.609 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.609 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.609 [2024-12-05 12:54:25.980218] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:43.609 12:54:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.609 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:43.609 "name": "raid_bdev1", 00:22:43.609 "aliases": [ 00:22:43.609 "0092cdd3-2550-450d-b471-4528ecb192d0" 00:22:43.609 ], 00:22:43.609 "product_name": "Raid Volume", 00:22:43.609 "block_size": 512, 00:22:43.609 "num_blocks": 63488, 00:22:43.609 "uuid": "0092cdd3-2550-450d-b471-4528ecb192d0", 00:22:43.609 "assigned_rate_limits": { 00:22:43.609 "rw_ios_per_sec": 0, 00:22:43.609 "rw_mbytes_per_sec": 0, 00:22:43.609 "r_mbytes_per_sec": 0, 00:22:43.609 "w_mbytes_per_sec": 0 00:22:43.609 }, 00:22:43.609 "claimed": false, 00:22:43.609 "zoned": false, 00:22:43.609 "supported_io_types": { 00:22:43.609 "read": true, 00:22:43.609 "write": true, 00:22:43.609 "unmap": false, 00:22:43.609 "flush": false, 00:22:43.609 "reset": true, 00:22:43.609 "nvme_admin": false, 00:22:43.609 "nvme_io": false, 00:22:43.609 "nvme_io_md": false, 00:22:43.609 "write_zeroes": true, 00:22:43.609 "zcopy": false, 00:22:43.609 "get_zone_info": false, 00:22:43.609 "zone_management": false, 00:22:43.609 "zone_append": false, 00:22:43.609 "compare": false, 00:22:43.609 "compare_and_write": false, 00:22:43.609 "abort": false, 00:22:43.609 "seek_hole": false, 00:22:43.609 "seek_data": false, 00:22:43.609 "copy": false, 00:22:43.609 "nvme_iov_md": false 00:22:43.609 }, 00:22:43.609 "memory_domains": [ 00:22:43.609 { 00:22:43.609 "dma_device_id": "system", 00:22:43.609 
"dma_device_type": 1 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.609 "dma_device_type": 2 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "dma_device_id": "system", 00:22:43.609 "dma_device_type": 1 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.609 "dma_device_type": 2 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "dma_device_id": "system", 00:22:43.609 "dma_device_type": 1 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.609 "dma_device_type": 2 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "dma_device_id": "system", 00:22:43.609 "dma_device_type": 1 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.609 "dma_device_type": 2 00:22:43.609 } 00:22:43.609 ], 00:22:43.609 "driver_specific": { 00:22:43.609 "raid": { 00:22:43.609 "uuid": "0092cdd3-2550-450d-b471-4528ecb192d0", 00:22:43.609 "strip_size_kb": 0, 00:22:43.609 "state": "online", 00:22:43.609 "raid_level": "raid1", 00:22:43.609 "superblock": true, 00:22:43.609 "num_base_bdevs": 4, 00:22:43.609 "num_base_bdevs_discovered": 4, 00:22:43.609 "num_base_bdevs_operational": 4, 00:22:43.609 "base_bdevs_list": [ 00:22:43.609 { 00:22:43.609 "name": "pt1", 00:22:43.609 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:43.609 "is_configured": true, 00:22:43.609 "data_offset": 2048, 00:22:43.609 "data_size": 63488 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "name": "pt2", 00:22:43.609 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:43.609 "is_configured": true, 00:22:43.609 "data_offset": 2048, 00:22:43.609 "data_size": 63488 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "name": "pt3", 00:22:43.609 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:43.609 "is_configured": true, 00:22:43.609 "data_offset": 2048, 00:22:43.609 "data_size": 63488 00:22:43.609 }, 00:22:43.609 { 00:22:43.609 "name": "pt4", 00:22:43.609 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:22:43.609 "is_configured": true, 00:22:43.609 "data_offset": 2048, 00:22:43.609 "data_size": 63488 00:22:43.609 } 00:22:43.609 ] 00:22:43.609 } 00:22:43.609 } 00:22:43.609 }' 00:22:43.609 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:43.609 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:43.609 pt2 00:22:43.609 pt3 00:22:43.609 pt4' 00:22:43.609 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:43.609 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:43.609 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:43.609 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:43.609 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:43.609 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.609 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.609 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.609 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:43.609 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:43.609 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:43.609 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:43.609 12:54:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.609 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.609 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:43.609 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.609 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:43.609 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:43.610 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:43.610 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:43.610 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.610 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.610 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:43.610 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.610 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:43.610 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:43.610 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:43.610 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:22:43.610 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.610 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.610 12:54:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:43.610 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.912 [2024-12-05 12:54:26.204208] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0092cdd3-2550-450d-b471-4528ecb192d0 '!=' 0092cdd3-2550-450d-b471-4528ecb192d0 ']' 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.912 [2024-12-05 12:54:26.235944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:43.912 
12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.912 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:43.912 "name": "raid_bdev1", 00:22:43.912 "uuid": "0092cdd3-2550-450d-b471-4528ecb192d0", 00:22:43.912 "strip_size_kb": 0, 00:22:43.913 "state": 
"online", 00:22:43.913 "raid_level": "raid1", 00:22:43.913 "superblock": true, 00:22:43.913 "num_base_bdevs": 4, 00:22:43.913 "num_base_bdevs_discovered": 3, 00:22:43.913 "num_base_bdevs_operational": 3, 00:22:43.913 "base_bdevs_list": [ 00:22:43.913 { 00:22:43.913 "name": null, 00:22:43.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.913 "is_configured": false, 00:22:43.913 "data_offset": 0, 00:22:43.913 "data_size": 63488 00:22:43.913 }, 00:22:43.913 { 00:22:43.913 "name": "pt2", 00:22:43.913 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:43.913 "is_configured": true, 00:22:43.913 "data_offset": 2048, 00:22:43.913 "data_size": 63488 00:22:43.913 }, 00:22:43.913 { 00:22:43.913 "name": "pt3", 00:22:43.913 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:43.913 "is_configured": true, 00:22:43.913 "data_offset": 2048, 00:22:43.913 "data_size": 63488 00:22:43.913 }, 00:22:43.913 { 00:22:43.913 "name": "pt4", 00:22:43.913 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:43.913 "is_configured": true, 00:22:43.913 "data_offset": 2048, 00:22:43.913 "data_size": 63488 00:22:43.913 } 00:22:43.913 ] 00:22:43.913 }' 00:22:43.913 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:43.913 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.170 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.171 [2024-12-05 12:54:26.567985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:44.171 [2024-12-05 12:54:26.568010] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:44.171 [2024-12-05 12:54:26.568066] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:44.171 [2024-12-05 12:54:26.568131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:44.171 [2024-12-05 12:54:26.568138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.171 [2024-12-05 12:54:26.631986] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:44.171 [2024-12-05 
12:54:26.632031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:44.171 [2024-12-05 12:54:26.632045] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:44.171 [2024-12-05 12:54:26.632053] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:44.171 [2024-12-05 12:54:26.633902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:44.171 [2024-12-05 12:54:26.634021] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:44.171 [2024-12-05 12:54:26.634095] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:44.171 [2024-12-05 12:54:26.634129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:44.171 pt2 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:44.171 12:54:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:44.171 "name": "raid_bdev1", 00:22:44.171 "uuid": "0092cdd3-2550-450d-b471-4528ecb192d0", 00:22:44.171 "strip_size_kb": 0, 00:22:44.171 "state": "configuring", 00:22:44.171 "raid_level": "raid1", 00:22:44.171 "superblock": true, 00:22:44.171 "num_base_bdevs": 4, 00:22:44.171 "num_base_bdevs_discovered": 1, 00:22:44.171 "num_base_bdevs_operational": 3, 00:22:44.171 "base_bdevs_list": [ 00:22:44.171 { 00:22:44.171 "name": null, 00:22:44.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.171 "is_configured": false, 00:22:44.171 "data_offset": 2048, 00:22:44.171 "data_size": 63488 00:22:44.171 }, 00:22:44.171 { 00:22:44.171 "name": "pt2", 00:22:44.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:44.171 "is_configured": true, 00:22:44.171 "data_offset": 2048, 00:22:44.171 "data_size": 63488 00:22:44.171 }, 00:22:44.171 { 00:22:44.171 "name": null, 00:22:44.171 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:44.171 "is_configured": false, 00:22:44.171 "data_offset": 2048, 00:22:44.171 "data_size": 63488 00:22:44.171 }, 00:22:44.171 { 00:22:44.171 "name": null, 00:22:44.171 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:44.171 "is_configured": false, 00:22:44.171 "data_offset": 2048, 00:22:44.171 "data_size": 63488 00:22:44.171 
} 00:22:44.171 ] 00:22:44.171 }' 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:44.171 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.431 [2024-12-05 12:54:26.936056] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:44.431 [2024-12-05 12:54:26.936105] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:44.431 [2024-12-05 12:54:26.936122] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:22:44.431 [2024-12-05 12:54:26.936129] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:44.431 [2024-12-05 12:54:26.936469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:44.431 [2024-12-05 12:54:26.936479] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:44.431 [2024-12-05 12:54:26.936568] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:44.431 [2024-12-05 12:54:26.936586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:44.431 pt3 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:44.431 "name": "raid_bdev1", 00:22:44.431 "uuid": "0092cdd3-2550-450d-b471-4528ecb192d0", 00:22:44.431 "strip_size_kb": 0, 00:22:44.431 "state": "configuring", 00:22:44.431 "raid_level": "raid1", 00:22:44.431 "superblock": true, 00:22:44.431 "num_base_bdevs": 4, 00:22:44.431 "num_base_bdevs_discovered": 2, 
00:22:44.431 "num_base_bdevs_operational": 3, 00:22:44.431 "base_bdevs_list": [ 00:22:44.431 { 00:22:44.431 "name": null, 00:22:44.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.431 "is_configured": false, 00:22:44.431 "data_offset": 2048, 00:22:44.431 "data_size": 63488 00:22:44.431 }, 00:22:44.431 { 00:22:44.431 "name": "pt2", 00:22:44.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:44.431 "is_configured": true, 00:22:44.431 "data_offset": 2048, 00:22:44.431 "data_size": 63488 00:22:44.431 }, 00:22:44.431 { 00:22:44.431 "name": "pt3", 00:22:44.431 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:44.431 "is_configured": true, 00:22:44.431 "data_offset": 2048, 00:22:44.431 "data_size": 63488 00:22:44.431 }, 00:22:44.431 { 00:22:44.431 "name": null, 00:22:44.431 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:44.431 "is_configured": false, 00:22:44.431 "data_offset": 2048, 00:22:44.431 "data_size": 63488 00:22:44.431 } 00:22:44.431 ] 00:22:44.431 }' 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:44.431 12:54:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.689 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:22:44.689 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:44.689 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:22:44.689 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:44.689 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.689 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.689 [2024-12-05 12:54:27.264131] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:44.689 [2024-12-05 
12:54:27.264300] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:44.689 [2024-12-05 12:54:27.264324] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:22:44.689 [2024-12-05 12:54:27.264331] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:44.689 [2024-12-05 12:54:27.264679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:44.689 [2024-12-05 12:54:27.264691] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:44.689 [2024-12-05 12:54:27.264752] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:44.689 [2024-12-05 12:54:27.264768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:44.689 [2024-12-05 12:54:27.264866] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:44.689 [2024-12-05 12:54:27.264872] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:44.689 [2024-12-05 12:54:27.265070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:44.689 [2024-12-05 12:54:27.265180] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:44.689 [2024-12-05 12:54:27.265188] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:44.689 [2024-12-05 12:54:27.265288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:44.689 pt4 00:22:44.689 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.689 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:44.689 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:44.689 12:54:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:44.689 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:44.689 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:44.689 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:44.689 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:44.689 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:44.689 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:44.689 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:44.947 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.947 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.947 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.947 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.947 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.947 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:44.947 "name": "raid_bdev1", 00:22:44.947 "uuid": "0092cdd3-2550-450d-b471-4528ecb192d0", 00:22:44.947 "strip_size_kb": 0, 00:22:44.947 "state": "online", 00:22:44.947 "raid_level": "raid1", 00:22:44.947 "superblock": true, 00:22:44.947 "num_base_bdevs": 4, 00:22:44.947 "num_base_bdevs_discovered": 3, 00:22:44.947 "num_base_bdevs_operational": 3, 00:22:44.947 "base_bdevs_list": [ 00:22:44.947 { 00:22:44.947 "name": null, 00:22:44.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.947 
"is_configured": false, 00:22:44.947 "data_offset": 2048, 00:22:44.947 "data_size": 63488 00:22:44.947 }, 00:22:44.947 { 00:22:44.947 "name": "pt2", 00:22:44.947 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:44.947 "is_configured": true, 00:22:44.947 "data_offset": 2048, 00:22:44.947 "data_size": 63488 00:22:44.947 }, 00:22:44.947 { 00:22:44.947 "name": "pt3", 00:22:44.947 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:44.947 "is_configured": true, 00:22:44.947 "data_offset": 2048, 00:22:44.947 "data_size": 63488 00:22:44.947 }, 00:22:44.947 { 00:22:44.947 "name": "pt4", 00:22:44.947 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:44.947 "is_configured": true, 00:22:44.947 "data_offset": 2048, 00:22:44.947 "data_size": 63488 00:22:44.947 } 00:22:44.947 ] 00:22:44.947 }' 00:22:44.947 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:44.947 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.205 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:45.205 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.205 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.205 [2024-12-05 12:54:27.572161] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:45.205 [2024-12-05 12:54:27.572184] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:45.205 [2024-12-05 12:54:27.572240] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:45.205 [2024-12-05 12:54:27.572302] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:45.205 [2024-12-05 12:54:27.572312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 
00:22:45.205 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.205 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:45.205 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.205 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.205 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.205 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.205 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:45.205 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:45.205 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:22:45.205 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:22:45.205 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.206 [2024-12-05 12:54:27.616159] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:45.206 [2024-12-05 12:54:27.616205] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:22:45.206 [2024-12-05 12:54:27.616217] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:22:45.206 [2024-12-05 12:54:27.616228] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:45.206 [2024-12-05 12:54:27.618064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:45.206 [2024-12-05 12:54:27.618096] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:45.206 [2024-12-05 12:54:27.618158] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:45.206 [2024-12-05 12:54:27.618193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:45.206 [2024-12-05 12:54:27.618292] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:45.206 [2024-12-05 12:54:27.618302] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:45.206 [2024-12-05 12:54:27.618315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:45.206 [2024-12-05 12:54:27.618359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:45.206 [2024-12-05 12:54:27.618436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:45.206 pt1 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:45.206 "name": "raid_bdev1", 00:22:45.206 "uuid": "0092cdd3-2550-450d-b471-4528ecb192d0", 00:22:45.206 "strip_size_kb": 0, 00:22:45.206 "state": "configuring", 00:22:45.206 "raid_level": "raid1", 00:22:45.206 "superblock": true, 00:22:45.206 "num_base_bdevs": 4, 00:22:45.206 "num_base_bdevs_discovered": 2, 00:22:45.206 "num_base_bdevs_operational": 3, 00:22:45.206 "base_bdevs_list": [ 00:22:45.206 { 00:22:45.206 "name": null, 00:22:45.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.206 "is_configured": false, 00:22:45.206 
"data_offset": 2048, 00:22:45.206 "data_size": 63488 00:22:45.206 }, 00:22:45.206 { 00:22:45.206 "name": "pt2", 00:22:45.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:45.206 "is_configured": true, 00:22:45.206 "data_offset": 2048, 00:22:45.206 "data_size": 63488 00:22:45.206 }, 00:22:45.206 { 00:22:45.206 "name": "pt3", 00:22:45.206 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:45.206 "is_configured": true, 00:22:45.206 "data_offset": 2048, 00:22:45.206 "data_size": 63488 00:22:45.206 }, 00:22:45.206 { 00:22:45.206 "name": null, 00:22:45.206 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:45.206 "is_configured": false, 00:22:45.206 "data_offset": 2048, 00:22:45.206 "data_size": 63488 00:22:45.206 } 00:22:45.206 ] 00:22:45.206 }' 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:45.206 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:22:45.465 [2024-12-05 12:54:27.948229] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:45.465 [2024-12-05 12:54:27.948276] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:45.465 [2024-12-05 12:54:27.948293] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:22:45.465 [2024-12-05 12:54:27.948300] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:45.465 [2024-12-05 12:54:27.948654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:45.465 [2024-12-05 12:54:27.948666] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:45.465 [2024-12-05 12:54:27.948725] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:45.465 [2024-12-05 12:54:27.948740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:45.465 [2024-12-05 12:54:27.948837] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:45.465 [2024-12-05 12:54:27.948844] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:45.465 [2024-12-05 12:54:27.949043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:22:45.465 [2024-12-05 12:54:27.949149] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:45.465 [2024-12-05 12:54:27.949161] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:45.465 [2024-12-05 12:54:27.949264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:45.465 pt4 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:45.465 "name": "raid_bdev1", 00:22:45.465 "uuid": "0092cdd3-2550-450d-b471-4528ecb192d0", 00:22:45.465 "strip_size_kb": 0, 00:22:45.465 "state": "online", 00:22:45.465 "raid_level": "raid1", 00:22:45.465 "superblock": true, 00:22:45.465 "num_base_bdevs": 4, 00:22:45.465 "num_base_bdevs_discovered": 3, 00:22:45.465 "num_base_bdevs_operational": 3, 00:22:45.465 
"base_bdevs_list": [ 00:22:45.465 { 00:22:45.465 "name": null, 00:22:45.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.465 "is_configured": false, 00:22:45.465 "data_offset": 2048, 00:22:45.465 "data_size": 63488 00:22:45.465 }, 00:22:45.465 { 00:22:45.465 "name": "pt2", 00:22:45.465 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:45.465 "is_configured": true, 00:22:45.465 "data_offset": 2048, 00:22:45.465 "data_size": 63488 00:22:45.465 }, 00:22:45.465 { 00:22:45.465 "name": "pt3", 00:22:45.465 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:45.465 "is_configured": true, 00:22:45.465 "data_offset": 2048, 00:22:45.465 "data_size": 63488 00:22:45.465 }, 00:22:45.465 { 00:22:45.465 "name": "pt4", 00:22:45.465 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:45.465 "is_configured": true, 00:22:45.465 "data_offset": 2048, 00:22:45.465 "data_size": 63488 00:22:45.465 } 00:22:45.465 ] 00:22:45.465 }' 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:45.465 12:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.724 12:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:45.724 12:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:45.724 12:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.724 12:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.724 12:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.724 12:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:45.724 12:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:45.724 12:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # 
jq -r '.[] | .uuid' 00:22:45.724 12:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.724 12:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.724 [2024-12-05 12:54:28.292560] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:45.724 12:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.981 12:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 0092cdd3-2550-450d-b471-4528ecb192d0 '!=' 0092cdd3-2550-450d-b471-4528ecb192d0 ']' 00:22:45.981 12:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72356 00:22:45.981 12:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72356 ']' 00:22:45.981 12:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72356 00:22:45.981 12:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:22:45.981 12:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:45.981 12:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72356 00:22:45.981 killing process with pid 72356 00:22:45.981 12:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:45.981 12:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:45.981 12:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72356' 00:22:45.981 12:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72356 00:22:45.981 [2024-12-05 12:54:28.336924] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:45.981 12:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72356 00:22:45.981 
[2024-12-05 12:54:28.336990] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:45.981 [2024-12-05 12:54:28.337053] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:45.981 [2024-12-05 12:54:28.337063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:45.981 [2024-12-05 12:54:28.528445] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:46.547 12:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:22:46.547 00:22:46.547 real 0m5.958s 00:22:46.547 user 0m9.507s 00:22:46.547 sys 0m0.993s 00:22:46.547 12:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.547 ************************************ 00:22:46.547 END TEST raid_superblock_test 00:22:46.547 ************************************ 00:22:46.547 12:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.805 12:54:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:22:46.805 12:54:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:46.805 12:54:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.805 12:54:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:46.805 ************************************ 00:22:46.805 START TEST raid_read_error_test 00:22:46.805 ************************************ 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 
00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:22:46.805 12:54:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LK4QFg7riM 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72821 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72821 00:22:46.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72821 ']' 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.805 12:54:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.806 12:54:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:46.806 [2024-12-05 12:54:29.226143] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:22:46.806 [2024-12-05 12:54:29.226261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72821 ] 00:22:46.806 [2024-12-05 12:54:29.379242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.063 [2024-12-05 12:54:29.464110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.063 [2024-12-05 12:54:29.573303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:47.063 [2024-12-05 12:54:29.573473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.667 BaseBdev1_malloc 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.667 true 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.667 [2024-12-05 12:54:30.101990] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:47.667 [2024-12-05 12:54:30.102036] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.667 [2024-12-05 12:54:30.102052] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:47.667 [2024-12-05 12:54:30.102060] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.667 [2024-12-05 12:54:30.103786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.667 [2024-12-05 12:54:30.103816] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:47.667 BaseBdev1 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.667 BaseBdev2_malloc 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.667 true 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.667 [2024-12-05 12:54:30.141393] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:47.667 [2024-12-05 12:54:30.141560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.667 [2024-12-05 12:54:30.141579] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:47.667 [2024-12-05 12:54:30.141588] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.667 [2024-12-05 12:54:30.143281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.667 [2024-12-05 12:54:30.143307] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:47.667 BaseBdev2 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:47.667 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.668 BaseBdev3_malloc 00:22:47.668 12:54:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.668 true 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.668 [2024-12-05 12:54:30.195964] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:47.668 [2024-12-05 12:54:30.196004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.668 [2024-12-05 12:54:30.196017] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:47.668 [2024-12-05 12:54:30.196026] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.668 [2024-12-05 12:54:30.197726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.668 [2024-12-05 12:54:30.197755] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:47.668 BaseBdev3 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.668 BaseBdev4_malloc 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.668 true 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.668 [2024-12-05 12:54:30.235508] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:47.668 [2024-12-05 12:54:30.235543] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.668 [2024-12-05 12:54:30.235556] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:47.668 [2024-12-05 12:54:30.235564] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.668 [2024-12-05 12:54:30.237252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.668 [2024-12-05 12:54:30.237370] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:47.668 BaseBdev4 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.668 [2024-12-05 12:54:30.243556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:47.668 [2024-12-05 12:54:30.245130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:47.668 [2024-12-05 12:54:30.245190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:47.668 [2024-12-05 12:54:30.245243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:47.668 [2024-12-05 12:54:30.245426] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:22:47.668 [2024-12-05 12:54:30.245435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:47.668 [2024-12-05 12:54:30.245646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:22:47.668 [2024-12-05 12:54:30.245770] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:22:47.668 [2024-12-05 12:54:30.245777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:22:47.668 [2024-12-05 12:54:30.245888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:47.668 12:54:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:47.668 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:47.926 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.926 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.926 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.926 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.926 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.926 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:47.926 "name": "raid_bdev1", 00:22:47.926 "uuid": "a7599fe9-c6cc-4d5b-840e-a43bf9c9f160", 00:22:47.926 "strip_size_kb": 0, 00:22:47.926 "state": "online", 00:22:47.926 "raid_level": "raid1", 00:22:47.926 "superblock": true, 00:22:47.926 "num_base_bdevs": 4, 00:22:47.926 "num_base_bdevs_discovered": 4, 00:22:47.926 "num_base_bdevs_operational": 4, 00:22:47.926 "base_bdevs_list": [ 00:22:47.926 { 
00:22:47.926 "name": "BaseBdev1", 00:22:47.926 "uuid": "94d40e33-f71c-5928-bea6-8891afd89e68", 00:22:47.926 "is_configured": true, 00:22:47.926 "data_offset": 2048, 00:22:47.926 "data_size": 63488 00:22:47.926 }, 00:22:47.926 { 00:22:47.926 "name": "BaseBdev2", 00:22:47.926 "uuid": "445d0229-b737-5cc7-b518-b3adcfa1430d", 00:22:47.926 "is_configured": true, 00:22:47.926 "data_offset": 2048, 00:22:47.926 "data_size": 63488 00:22:47.926 }, 00:22:47.926 { 00:22:47.926 "name": "BaseBdev3", 00:22:47.926 "uuid": "d53928f7-43b9-5566-bfee-bad747499285", 00:22:47.926 "is_configured": true, 00:22:47.926 "data_offset": 2048, 00:22:47.926 "data_size": 63488 00:22:47.926 }, 00:22:47.926 { 00:22:47.926 "name": "BaseBdev4", 00:22:47.926 "uuid": "800f9c4a-a5b1-5899-b02c-f63fbb07c047", 00:22:47.926 "is_configured": true, 00:22:47.926 "data_offset": 2048, 00:22:47.926 "data_size": 63488 00:22:47.926 } 00:22:47.926 ] 00:22:47.926 }' 00:22:47.926 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:47.926 12:54:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.184 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:22:48.184 12:54:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:22:48.184 [2024-12-05 12:54:30.652436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.150 12:54:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.150 12:54:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:49.150 "name": "raid_bdev1", 00:22:49.150 "uuid": "a7599fe9-c6cc-4d5b-840e-a43bf9c9f160", 00:22:49.150 "strip_size_kb": 0, 00:22:49.150 "state": "online", 00:22:49.150 "raid_level": "raid1", 00:22:49.150 "superblock": true, 00:22:49.150 "num_base_bdevs": 4, 00:22:49.150 "num_base_bdevs_discovered": 4, 00:22:49.150 "num_base_bdevs_operational": 4, 00:22:49.150 "base_bdevs_list": [ 00:22:49.150 { 00:22:49.150 "name": "BaseBdev1", 00:22:49.150 "uuid": "94d40e33-f71c-5928-bea6-8891afd89e68", 00:22:49.150 "is_configured": true, 00:22:49.150 "data_offset": 2048, 00:22:49.150 "data_size": 63488 00:22:49.150 }, 00:22:49.150 { 00:22:49.150 "name": "BaseBdev2", 00:22:49.150 "uuid": "445d0229-b737-5cc7-b518-b3adcfa1430d", 00:22:49.150 "is_configured": true, 00:22:49.150 "data_offset": 2048, 00:22:49.150 "data_size": 63488 00:22:49.150 }, 00:22:49.150 { 00:22:49.150 "name": "BaseBdev3", 00:22:49.150 "uuid": "d53928f7-43b9-5566-bfee-bad747499285", 00:22:49.150 "is_configured": true, 00:22:49.150 "data_offset": 2048, 00:22:49.150 "data_size": 63488 00:22:49.150 }, 00:22:49.150 { 00:22:49.150 "name": "BaseBdev4", 00:22:49.150 "uuid": "800f9c4a-a5b1-5899-b02c-f63fbb07c047", 00:22:49.150 "is_configured": true, 00:22:49.150 "data_offset": 2048, 00:22:49.150 "data_size": 63488 00:22:49.150 } 00:22:49.150 ] 00:22:49.150 }' 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:49.150 12:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.408 12:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:49.408 12:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.408 12:54:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:49.408 [2024-12-05 12:54:31.906693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:49.408 [2024-12-05 12:54:31.906721] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:49.408 [2024-12-05 12:54:31.909222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:49.408 [2024-12-05 12:54:31.909272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:49.408 [2024-12-05 12:54:31.909377] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:49.408 [2024-12-05 12:54:31.909387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:22:49.408 { 00:22:49.408 "results": [ 00:22:49.408 { 00:22:49.408 "job": "raid_bdev1", 00:22:49.408 "core_mask": "0x1", 00:22:49.408 "workload": "randrw", 00:22:49.408 "percentage": 50, 00:22:49.408 "status": "finished", 00:22:49.408 "queue_depth": 1, 00:22:49.408 "io_size": 131072, 00:22:49.408 "runtime": 1.252657, 00:22:49.408 "iops": 13361.199434482065, 00:22:49.408 "mibps": 1670.1499293102581, 00:22:49.408 "io_failed": 0, 00:22:49.408 "io_timeout": 0, 00:22:49.408 "avg_latency_us": 72.23453077244797, 00:22:49.408 "min_latency_us": 24.123076923076923, 00:22:49.408 "max_latency_us": 1342.2276923076922 00:22:49.408 } 00:22:49.408 ], 00:22:49.408 "core_count": 1 00:22:49.408 } 00:22:49.408 12:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.408 12:54:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72821 00:22:49.408 12:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72821 ']' 00:22:49.408 12:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72821 00:22:49.408 12:54:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:22:49.408 12:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.408 12:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72821 00:22:49.408 killing process with pid 72821 00:22:49.408 12:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:49.408 12:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:49.408 12:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72821' 00:22:49.408 12:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72821 00:22:49.408 [2024-12-05 12:54:31.939152] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:49.408 12:54:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72821 00:22:49.667 [2024-12-05 12:54:32.098568] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:50.234 12:54:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LK4QFg7riM 00:22:50.234 12:54:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:22:50.234 12:54:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:22:50.234 12:54:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:22:50.234 12:54:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:22:50.234 ************************************ 00:22:50.234 END TEST raid_read_error_test 00:22:50.234 ************************************ 00:22:50.234 12:54:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:50.234 12:54:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:50.234 12:54:32 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:22:50.234 00:22:50.234 real 0m3.563s 00:22:50.234 user 0m4.270s 00:22:50.234 sys 0m0.388s 00:22:50.234 12:54:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:50.234 12:54:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.234 12:54:32 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:22:50.234 12:54:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:50.234 12:54:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:50.234 12:54:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:50.234 ************************************ 00:22:50.234 START TEST raid_write_error_test 00:22:50.234 ************************************ 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:22:50.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2LgAptHeZs 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72956 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72956 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72956 ']' 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.234 12:54:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:50.491 [2024-12-05 12:54:32.825541] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:22:50.491 [2024-12-05 12:54:32.825812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72956 ] 00:22:50.491 [2024-12-05 12:54:32.985809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.748 [2024-12-05 12:54:33.087236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.748 [2024-12-05 12:54:33.223861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:50.748 [2024-12-05 12:54:33.223896] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.314 BaseBdev1_malloc 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.314 true 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.314 [2024-12-05 12:54:33.710353] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:51.314 [2024-12-05 12:54:33.710404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:51.314 [2024-12-05 12:54:33.710422] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:51.314 [2024-12-05 12:54:33.710432] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:51.314 [2024-12-05 12:54:33.712569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:51.314 [2024-12-05 12:54:33.712605] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:51.314 BaseBdev1 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.314 BaseBdev2_malloc 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:22:51.314 12:54:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.314 true 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.314 [2024-12-05 12:54:33.754405] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:51.314 [2024-12-05 12:54:33.754452] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:51.314 [2024-12-05 12:54:33.754467] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:51.314 [2024-12-05 12:54:33.754477] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:51.314 [2024-12-05 12:54:33.756570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:51.314 [2024-12-05 12:54:33.756603] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:51.314 BaseBdev2 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:22:51.314 BaseBdev3_malloc 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.314 true 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.314 [2024-12-05 12:54:33.807115] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:51.314 [2024-12-05 12:54:33.807166] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:51.314 [2024-12-05 12:54:33.807183] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:51.314 [2024-12-05 12:54:33.807193] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:51.314 [2024-12-05 12:54:33.809314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:51.314 [2024-12-05 12:54:33.809459] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:51.314 BaseBdev3 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.314 BaseBdev4_malloc 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.314 true 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.314 [2024-12-05 12:54:33.851066] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:51.314 [2024-12-05 12:54:33.851114] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:51.314 [2024-12-05 12:54:33.851132] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:51.314 [2024-12-05 12:54:33.851144] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:51.314 [2024-12-05 12:54:33.853412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:51.314 [2024-12-05 12:54:33.853450] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:51.314 BaseBdev4 
00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.314 [2024-12-05 12:54:33.859121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:51.314 [2024-12-05 12:54:33.861126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:51.314 [2024-12-05 12:54:33.861202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:51.314 [2024-12-05 12:54:33.861268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:51.314 [2024-12-05 12:54:33.861532] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:22:51.314 [2024-12-05 12:54:33.861546] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:51.314 [2024-12-05 12:54:33.861784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:22:51.314 [2024-12-05 12:54:33.861942] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:22:51.314 [2024-12-05 12:54:33.861950] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:22:51.314 [2024-12-05 12:54:33.862089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:22:51.314 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:51.315 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:51.315 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:51.315 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:51.315 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:51.315 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:51.315 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:51.315 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:51.315 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:51.315 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.315 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.315 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.315 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.315 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.572 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:51.572 "name": "raid_bdev1", 00:22:51.572 "uuid": "d6df1e54-3d45-42d6-8f0b-e601aa5d2880", 00:22:51.572 "strip_size_kb": 0, 00:22:51.572 "state": "online", 00:22:51.572 "raid_level": "raid1", 00:22:51.572 "superblock": true, 00:22:51.572 "num_base_bdevs": 4, 00:22:51.572 "num_base_bdevs_discovered": 4, 00:22:51.572 
"num_base_bdevs_operational": 4, 00:22:51.572 "base_bdevs_list": [ 00:22:51.572 { 00:22:51.572 "name": "BaseBdev1", 00:22:51.572 "uuid": "7b1c3b7b-9a55-57bc-ac01-37a1f980022a", 00:22:51.572 "is_configured": true, 00:22:51.572 "data_offset": 2048, 00:22:51.572 "data_size": 63488 00:22:51.572 }, 00:22:51.572 { 00:22:51.572 "name": "BaseBdev2", 00:22:51.572 "uuid": "bd7fafc9-dbdd-5b35-9a33-ce69ccf22e1a", 00:22:51.572 "is_configured": true, 00:22:51.572 "data_offset": 2048, 00:22:51.572 "data_size": 63488 00:22:51.572 }, 00:22:51.572 { 00:22:51.572 "name": "BaseBdev3", 00:22:51.572 "uuid": "73eb9888-f0fd-58c6-9794-12749ccd86f6", 00:22:51.572 "is_configured": true, 00:22:51.573 "data_offset": 2048, 00:22:51.573 "data_size": 63488 00:22:51.573 }, 00:22:51.573 { 00:22:51.573 "name": "BaseBdev4", 00:22:51.573 "uuid": "ed0e1c5d-21e1-5b2d-bf69-ee30ab0f1957", 00:22:51.573 "is_configured": true, 00:22:51.573 "data_offset": 2048, 00:22:51.573 "data_size": 63488 00:22:51.573 } 00:22:51.573 ] 00:22:51.573 }' 00:22:51.573 12:54:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:51.573 12:54:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.830 12:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:22:51.830 12:54:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:22:51.830 [2024-12-05 12:54:34.240204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:22:52.761 12:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:52.761 12:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.761 12:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.761 [2024-12-05 12:54:35.166637] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:22:52.761 [2024-12-05 12:54:35.166798] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:52.761 [2024-12-05 12:54:35.167030] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:22:52.761 12:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.761 12:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:22:52.761 12:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:22:52.761 12:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:22:52.761 12:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:22:52.761 12:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:52.761 12:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:52.761 12:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:52.762 12:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:52.762 12:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:52.762 12:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:52.762 12:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:52.762 12:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:52.762 12:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:52.762 12:54:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:22:52.762 12:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.762 12:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.762 12:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.762 12:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.762 12:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.762 12:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:52.762 "name": "raid_bdev1", 00:22:52.762 "uuid": "d6df1e54-3d45-42d6-8f0b-e601aa5d2880", 00:22:52.762 "strip_size_kb": 0, 00:22:52.762 "state": "online", 00:22:52.762 "raid_level": "raid1", 00:22:52.762 "superblock": true, 00:22:52.762 "num_base_bdevs": 4, 00:22:52.762 "num_base_bdevs_discovered": 3, 00:22:52.762 "num_base_bdevs_operational": 3, 00:22:52.762 "base_bdevs_list": [ 00:22:52.762 { 00:22:52.762 "name": null, 00:22:52.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.762 "is_configured": false, 00:22:52.762 "data_offset": 0, 00:22:52.762 "data_size": 63488 00:22:52.762 }, 00:22:52.762 { 00:22:52.762 "name": "BaseBdev2", 00:22:52.762 "uuid": "bd7fafc9-dbdd-5b35-9a33-ce69ccf22e1a", 00:22:52.762 "is_configured": true, 00:22:52.762 "data_offset": 2048, 00:22:52.762 "data_size": 63488 00:22:52.762 }, 00:22:52.762 { 00:22:52.762 "name": "BaseBdev3", 00:22:52.762 "uuid": "73eb9888-f0fd-58c6-9794-12749ccd86f6", 00:22:52.762 "is_configured": true, 00:22:52.762 "data_offset": 2048, 00:22:52.762 "data_size": 63488 00:22:52.762 }, 00:22:52.762 { 00:22:52.762 "name": "BaseBdev4", 00:22:52.762 "uuid": "ed0e1c5d-21e1-5b2d-bf69-ee30ab0f1957", 00:22:52.762 "is_configured": true, 00:22:52.762 "data_offset": 2048, 00:22:52.762 "data_size": 63488 00:22:52.762 } 00:22:52.762 ] 
00:22:52.762 }' 00:22:52.762 12:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:52.762 12:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.019 12:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:53.019 12:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.019 12:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.019 [2024-12-05 12:54:35.490812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:53.019 [2024-12-05 12:54:35.490838] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:53.019 [2024-12-05 12:54:35.493838] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:53.019 [2024-12-05 12:54:35.493977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:53.019 [2024-12-05 12:54:35.494095] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:53.019 [2024-12-05 12:54:35.494108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:22:53.019 { 00:22:53.019 "results": [ 00:22:53.019 { 00:22:53.019 "job": "raid_bdev1", 00:22:53.019 "core_mask": "0x1", 00:22:53.019 "workload": "randrw", 00:22:53.019 "percentage": 50, 00:22:53.019 "status": "finished", 00:22:53.019 "queue_depth": 1, 00:22:53.019 "io_size": 131072, 00:22:53.019 "runtime": 1.248597, 00:22:53.019 "iops": 11959.823706127758, 00:22:53.019 "mibps": 1494.9779632659697, 00:22:53.019 "io_failed": 0, 00:22:53.019 "io_timeout": 0, 00:22:53.019 "avg_latency_us": 80.34763523224248, 00:22:53.020 "min_latency_us": 30.72, 00:22:53.020 "max_latency_us": 1739.2246153846154 00:22:53.020 } 00:22:53.020 ], 00:22:53.020 "core_count": 1 00:22:53.020 } 
00:22:53.020 12:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.020 12:54:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72956 00:22:53.020 12:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72956 ']' 00:22:53.020 12:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72956 00:22:53.020 12:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:22:53.020 12:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:53.020 12:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72956 00:22:53.020 killing process with pid 72956 00:22:53.020 12:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:53.020 12:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:53.020 12:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72956' 00:22:53.020 12:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72956 00:22:53.020 [2024-12-05 12:54:35.522459] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:53.020 12:54:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72956 00:22:53.276 [2024-12-05 12:54:35.726021] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:54.208 12:54:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:22:54.208 12:54:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2LgAptHeZs 00:22:54.208 12:54:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:22:54.208 12:54:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:22:54.208 12:54:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:22:54.208 ************************************ 00:22:54.208 END TEST raid_write_error_test 00:22:54.208 ************************************ 00:22:54.208 12:54:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:54.208 12:54:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:54.208 12:54:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:22:54.208 00:22:54.208 real 0m3.689s 00:22:54.208 user 0m4.339s 00:22:54.208 sys 0m0.410s 00:22:54.208 12:54:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:54.208 12:54:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.208 12:54:36 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:22:54.208 12:54:36 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:22:54.208 12:54:36 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:22:54.208 12:54:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:54.208 12:54:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:54.208 12:54:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:54.208 ************************************ 00:22:54.208 START TEST raid_rebuild_test 00:22:54.208 ************************************ 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:22:54.208 
12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:54.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=73088 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 73088 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 73088 ']' 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.208 12:54:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:54.208 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:54.208 Zero copy mechanism will not be used. 00:22:54.208 [2024-12-05 12:54:36.551539] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:22:54.208 [2024-12-05 12:54:36.551656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73088 ] 00:22:54.208 [2024-12-05 12:54:36.705863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.465 [2024-12-05 12:54:36.791565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.465 [2024-12-05 12:54:36.901070] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:54.465 [2024-12-05 12:54:36.901103] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:55.032 BaseBdev1_malloc 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.032 [2024-12-05 12:54:37.448913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:55.032 [2024-12-05 12:54:37.448970] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.032 [2024-12-05 12:54:37.448988] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:55.032 [2024-12-05 12:54:37.448998] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.032 [2024-12-05 12:54:37.450761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.032 [2024-12-05 12:54:37.450896] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:55.032 BaseBdev1 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.032 BaseBdev2_malloc 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.032 [2024-12-05 12:54:37.480427] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:55.032 [2024-12-05 12:54:37.480472] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.032 [2024-12-05 12:54:37.480508] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:55.032 [2024-12-05 12:54:37.480517] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.032 [2024-12-05 12:54:37.482204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.032 [2024-12-05 12:54:37.482334] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:55.032 BaseBdev2 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.032 spare_malloc 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.032 spare_delay 
00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.032 [2024-12-05 12:54:37.533833] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:55.032 [2024-12-05 12:54:37.533879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.032 [2024-12-05 12:54:37.533894] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:55.032 [2024-12-05 12:54:37.533903] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.032 [2024-12-05 12:54:37.535648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.032 [2024-12-05 12:54:37.535774] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:55.032 spare 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.032 [2024-12-05 12:54:37.541879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:55.032 [2024-12-05 12:54:37.543374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:55.032 [2024-12-05 12:54:37.543538] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007780 00:22:55.032 [2024-12-05 12:54:37.543554] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:55.032 [2024-12-05 12:54:37.543750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:55.032 [2024-12-05 12:54:37.543882] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:55.032 [2024-12-05 12:54:37.543892] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:55.032 [2024-12-05 12:54:37.544004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.032 12:54:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.032 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:55.032 "name": "raid_bdev1", 00:22:55.032 "uuid": "0c9dceb1-3fc1-4a8b-8a3c-b83183669b1e", 00:22:55.032 "strip_size_kb": 0, 00:22:55.032 "state": "online", 00:22:55.032 "raid_level": "raid1", 00:22:55.032 "superblock": false, 00:22:55.032 "num_base_bdevs": 2, 00:22:55.032 "num_base_bdevs_discovered": 2, 00:22:55.032 "num_base_bdevs_operational": 2, 00:22:55.033 "base_bdevs_list": [ 00:22:55.033 { 00:22:55.033 "name": "BaseBdev1", 00:22:55.033 "uuid": "308b0bff-735b-51fa-94c9-690a39e03218", 00:22:55.033 "is_configured": true, 00:22:55.033 "data_offset": 0, 00:22:55.033 "data_size": 65536 00:22:55.033 }, 00:22:55.033 { 00:22:55.033 "name": "BaseBdev2", 00:22:55.033 "uuid": "85478223-832b-5fd9-8905-7271b843c47b", 00:22:55.033 "is_configured": true, 00:22:55.033 "data_offset": 0, 00:22:55.033 "data_size": 65536 00:22:55.033 } 00:22:55.033 ] 00:22:55.033 }' 00:22:55.033 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:55.033 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.290 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:55.290 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:55.290 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.290 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.290 [2024-12-05 
12:54:37.858166] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:55.290 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.549 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:22:55.549 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.549 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.549 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.549 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:55.549 12:54:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.549 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:22:55.549 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:55.549 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:55.549 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:55.549 12:54:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:55.549 12:54:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:55.549 12:54:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:55.549 12:54:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:55.549 12:54:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:55.549 12:54:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:55.549 12:54:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:55.549 12:54:37 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:55.549 12:54:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:55.549 12:54:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:55.549 [2024-12-05 12:54:38.102022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:55.549 /dev/nbd0 00:22:55.808 12:54:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:55.808 12:54:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:55.808 12:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:55.808 12:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:22:55.808 12:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:55.808 12:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:55.808 12:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:55.808 12:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:22:55.808 12:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:55.808 12:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:55.808 12:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:55.808 1+0 records in 00:22:55.808 1+0 records out 00:22:55.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236051 s, 17.4 MB/s 00:22:55.808 12:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:55.808 12:54:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:22:55.808 12:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:55.808 12:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:55.808 12:54:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:22:55.808 12:54:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:55.808 12:54:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:55.808 12:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:22:55.808 12:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:22:55.808 12:54:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:22:59.989 65536+0 records in 00:22:59.989 65536+0 records out 00:22:59.989 33554432 bytes (34 MB, 32 MiB) copied, 3.90759 s, 8.6 MB/s 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:59.989 
[2024-12-05 12:54:42.271649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.989 [2024-12-05 12:54:42.286500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:59.989 12:54:42 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:59.989 "name": "raid_bdev1", 00:22:59.989 "uuid": "0c9dceb1-3fc1-4a8b-8a3c-b83183669b1e", 00:22:59.989 "strip_size_kb": 0, 00:22:59.989 "state": "online", 00:22:59.989 "raid_level": "raid1", 00:22:59.989 "superblock": false, 00:22:59.989 "num_base_bdevs": 2, 00:22:59.989 "num_base_bdevs_discovered": 1, 00:22:59.989 "num_base_bdevs_operational": 1, 00:22:59.989 "base_bdevs_list": [ 00:22:59.989 { 00:22:59.989 "name": null, 00:22:59.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.989 "is_configured": false, 00:22:59.989 "data_offset": 0, 00:22:59.989 "data_size": 65536 00:22:59.989 }, 00:22:59.989 { 00:22:59.989 "name": "BaseBdev2", 00:22:59.989 "uuid": "85478223-832b-5fd9-8905-7271b843c47b", 00:22:59.989 "is_configured": true, 00:22:59.989 "data_offset": 0, 00:22:59.989 "data_size": 65536 00:22:59.989 } 00:22:59.989 ] 00:22:59.989 }' 00:22:59.989 12:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:59.989 12:54:42 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.247 12:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:00.247 12:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.247 12:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.247 [2024-12-05 12:54:42.606571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:00.247 [2024-12-05 12:54:42.615962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:23:00.247 12:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.247 12:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:00.247 [2024-12-05 12:54:42.617535] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:01.179 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:01.179 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:01.179 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:01.179 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:01.179 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:01.179 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.179 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.179 12:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.179 12:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.179 12:54:43 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.179 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:01.179 "name": "raid_bdev1", 00:23:01.179 "uuid": "0c9dceb1-3fc1-4a8b-8a3c-b83183669b1e", 00:23:01.179 "strip_size_kb": 0, 00:23:01.179 "state": "online", 00:23:01.179 "raid_level": "raid1", 00:23:01.179 "superblock": false, 00:23:01.179 "num_base_bdevs": 2, 00:23:01.179 "num_base_bdevs_discovered": 2, 00:23:01.179 "num_base_bdevs_operational": 2, 00:23:01.179 "process": { 00:23:01.179 "type": "rebuild", 00:23:01.179 "target": "spare", 00:23:01.179 "progress": { 00:23:01.179 "blocks": 20480, 00:23:01.179 "percent": 31 00:23:01.179 } 00:23:01.179 }, 00:23:01.179 "base_bdevs_list": [ 00:23:01.179 { 00:23:01.179 "name": "spare", 00:23:01.179 "uuid": "af90cf26-3ff4-5250-9955-ee5750b1e093", 00:23:01.179 "is_configured": true, 00:23:01.179 "data_offset": 0, 00:23:01.179 "data_size": 65536 00:23:01.179 }, 00:23:01.179 { 00:23:01.179 "name": "BaseBdev2", 00:23:01.179 "uuid": "85478223-832b-5fd9-8905-7271b843c47b", 00:23:01.179 "is_configured": true, 00:23:01.179 "data_offset": 0, 00:23:01.179 "data_size": 65536 00:23:01.179 } 00:23:01.179 ] 00:23:01.179 }' 00:23:01.179 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:01.179 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:01.179 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:01.179 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:01.179 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:01.179 12:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.179 12:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:23:01.179 [2024-12-05 12:54:43.739776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:01.438 [2024-12-05 12:54:43.823034] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:01.438 [2024-12-05 12:54:43.823100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:01.438 [2024-12-05 12:54:43.823112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:01.438 [2024-12-05 12:54:43.823120] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:01.438 12:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.438 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:01.438 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:01.438 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:01.438 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:01.438 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:01.438 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:01.438 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:01.438 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:01.438 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:01.438 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:01.438 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.438 12:54:43 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.438 12:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.438 12:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.438 12:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.438 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:01.438 "name": "raid_bdev1", 00:23:01.438 "uuid": "0c9dceb1-3fc1-4a8b-8a3c-b83183669b1e", 00:23:01.438 "strip_size_kb": 0, 00:23:01.438 "state": "online", 00:23:01.438 "raid_level": "raid1", 00:23:01.438 "superblock": false, 00:23:01.438 "num_base_bdevs": 2, 00:23:01.438 "num_base_bdevs_discovered": 1, 00:23:01.438 "num_base_bdevs_operational": 1, 00:23:01.438 "base_bdevs_list": [ 00:23:01.438 { 00:23:01.438 "name": null, 00:23:01.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.438 "is_configured": false, 00:23:01.438 "data_offset": 0, 00:23:01.438 "data_size": 65536 00:23:01.438 }, 00:23:01.438 { 00:23:01.438 "name": "BaseBdev2", 00:23:01.438 "uuid": "85478223-832b-5fd9-8905-7271b843c47b", 00:23:01.438 "is_configured": true, 00:23:01.438 "data_offset": 0, 00:23:01.438 "data_size": 65536 00:23:01.438 } 00:23:01.438 ] 00:23:01.438 }' 00:23:01.438 12:54:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:01.438 12:54:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.696 12:54:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:01.696 12:54:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:01.696 12:54:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:01.696 12:54:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:01.696 12:54:44 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:01.696 12:54:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.696 12:54:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.696 12:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.696 12:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.696 12:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.696 12:54:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:01.696 "name": "raid_bdev1", 00:23:01.696 "uuid": "0c9dceb1-3fc1-4a8b-8a3c-b83183669b1e", 00:23:01.696 "strip_size_kb": 0, 00:23:01.696 "state": "online", 00:23:01.696 "raid_level": "raid1", 00:23:01.696 "superblock": false, 00:23:01.696 "num_base_bdevs": 2, 00:23:01.696 "num_base_bdevs_discovered": 1, 00:23:01.696 "num_base_bdevs_operational": 1, 00:23:01.696 "base_bdevs_list": [ 00:23:01.696 { 00:23:01.696 "name": null, 00:23:01.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.696 "is_configured": false, 00:23:01.696 "data_offset": 0, 00:23:01.696 "data_size": 65536 00:23:01.696 }, 00:23:01.696 { 00:23:01.696 "name": "BaseBdev2", 00:23:01.696 "uuid": "85478223-832b-5fd9-8905-7271b843c47b", 00:23:01.696 "is_configured": true, 00:23:01.696 "data_offset": 0, 00:23:01.696 "data_size": 65536 00:23:01.696 } 00:23:01.696 ] 00:23:01.696 }' 00:23:01.696 12:54:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:01.696 12:54:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:01.696 12:54:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:01.697 12:54:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:23:01.697 12:54:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:01.697 12:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.697 12:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.697 [2024-12-05 12:54:44.245640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:01.697 [2024-12-05 12:54:44.254621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:23:01.697 12:54:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.697 12:54:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:01.697 [2024-12-05 12:54:44.256184] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:03.070 "name": "raid_bdev1", 00:23:03.070 "uuid": "0c9dceb1-3fc1-4a8b-8a3c-b83183669b1e", 00:23:03.070 "strip_size_kb": 0, 00:23:03.070 "state": "online", 00:23:03.070 "raid_level": "raid1", 00:23:03.070 "superblock": false, 00:23:03.070 "num_base_bdevs": 2, 00:23:03.070 "num_base_bdevs_discovered": 2, 00:23:03.070 "num_base_bdevs_operational": 2, 00:23:03.070 "process": { 00:23:03.070 "type": "rebuild", 00:23:03.070 "target": "spare", 00:23:03.070 "progress": { 00:23:03.070 "blocks": 20480, 00:23:03.070 "percent": 31 00:23:03.070 } 00:23:03.070 }, 00:23:03.070 "base_bdevs_list": [ 00:23:03.070 { 00:23:03.070 "name": "spare", 00:23:03.070 "uuid": "af90cf26-3ff4-5250-9955-ee5750b1e093", 00:23:03.070 "is_configured": true, 00:23:03.070 "data_offset": 0, 00:23:03.070 "data_size": 65536 00:23:03.070 }, 00:23:03.070 { 00:23:03.070 "name": "BaseBdev2", 00:23:03.070 "uuid": "85478223-832b-5fd9-8905-7271b843c47b", 00:23:03.070 "is_configured": true, 00:23:03.070 "data_offset": 0, 00:23:03.070 "data_size": 65536 00:23:03.070 } 00:23:03.070 ] 00:23:03.070 }' 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=278 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:03.070 "name": "raid_bdev1", 00:23:03.070 "uuid": "0c9dceb1-3fc1-4a8b-8a3c-b83183669b1e", 00:23:03.070 "strip_size_kb": 0, 00:23:03.070 "state": "online", 00:23:03.070 "raid_level": "raid1", 00:23:03.070 "superblock": false, 00:23:03.070 "num_base_bdevs": 2, 00:23:03.070 "num_base_bdevs_discovered": 2, 00:23:03.070 "num_base_bdevs_operational": 2, 00:23:03.070 "process": { 00:23:03.070 "type": "rebuild", 00:23:03.070 "target": "spare", 00:23:03.070 "progress": { 00:23:03.070 "blocks": 22528, 00:23:03.070 "percent": 34 00:23:03.070 } 00:23:03.070 }, 00:23:03.070 
"base_bdevs_list": [ 00:23:03.070 { 00:23:03.070 "name": "spare", 00:23:03.070 "uuid": "af90cf26-3ff4-5250-9955-ee5750b1e093", 00:23:03.070 "is_configured": true, 00:23:03.070 "data_offset": 0, 00:23:03.070 "data_size": 65536 00:23:03.070 }, 00:23:03.070 { 00:23:03.070 "name": "BaseBdev2", 00:23:03.070 "uuid": "85478223-832b-5fd9-8905-7271b843c47b", 00:23:03.070 "is_configured": true, 00:23:03.070 "data_offset": 0, 00:23:03.070 "data_size": 65536 00:23:03.070 } 00:23:03.070 ] 00:23:03.070 }' 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:03.070 12:54:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:04.046 12:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:04.046 12:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:04.046 12:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:04.046 12:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:04.046 12:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:04.046 12:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:04.046 12:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.046 12:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.046 12:54:46 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.046 12:54:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.046 12:54:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.046 12:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:04.046 "name": "raid_bdev1", 00:23:04.046 "uuid": "0c9dceb1-3fc1-4a8b-8a3c-b83183669b1e", 00:23:04.046 "strip_size_kb": 0, 00:23:04.046 "state": "online", 00:23:04.046 "raid_level": "raid1", 00:23:04.046 "superblock": false, 00:23:04.046 "num_base_bdevs": 2, 00:23:04.046 "num_base_bdevs_discovered": 2, 00:23:04.046 "num_base_bdevs_operational": 2, 00:23:04.046 "process": { 00:23:04.046 "type": "rebuild", 00:23:04.046 "target": "spare", 00:23:04.046 "progress": { 00:23:04.046 "blocks": 45056, 00:23:04.046 "percent": 68 00:23:04.046 } 00:23:04.046 }, 00:23:04.046 "base_bdevs_list": [ 00:23:04.046 { 00:23:04.046 "name": "spare", 00:23:04.046 "uuid": "af90cf26-3ff4-5250-9955-ee5750b1e093", 00:23:04.046 "is_configured": true, 00:23:04.046 "data_offset": 0, 00:23:04.046 "data_size": 65536 00:23:04.046 }, 00:23:04.046 { 00:23:04.046 "name": "BaseBdev2", 00:23:04.046 "uuid": "85478223-832b-5fd9-8905-7271b843c47b", 00:23:04.046 "is_configured": true, 00:23:04.046 "data_offset": 0, 00:23:04.046 "data_size": 65536 00:23:04.046 } 00:23:04.046 ] 00:23:04.046 }' 00:23:04.046 12:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:04.046 12:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:04.046 12:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:04.046 12:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:04.046 12:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:04.979 [2024-12-05 12:54:47.470056] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:04.979 [2024-12-05 12:54:47.470123] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:04.979 [2024-12-05 12:54:47.470163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:04.979 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:04.979 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:04.979 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:04.979 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:04.979 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:05.237 "name": "raid_bdev1", 00:23:05.237 "uuid": "0c9dceb1-3fc1-4a8b-8a3c-b83183669b1e", 00:23:05.237 "strip_size_kb": 0, 00:23:05.237 "state": "online", 00:23:05.237 "raid_level": "raid1", 00:23:05.237 "superblock": false, 00:23:05.237 "num_base_bdevs": 2, 00:23:05.237 "num_base_bdevs_discovered": 2, 00:23:05.237 "num_base_bdevs_operational": 2, 00:23:05.237 
"base_bdevs_list": [ 00:23:05.237 { 00:23:05.237 "name": "spare", 00:23:05.237 "uuid": "af90cf26-3ff4-5250-9955-ee5750b1e093", 00:23:05.237 "is_configured": true, 00:23:05.237 "data_offset": 0, 00:23:05.237 "data_size": 65536 00:23:05.237 }, 00:23:05.237 { 00:23:05.237 "name": "BaseBdev2", 00:23:05.237 "uuid": "85478223-832b-5fd9-8905-7271b843c47b", 00:23:05.237 "is_configured": true, 00:23:05.237 "data_offset": 0, 00:23:05.237 "data_size": 65536 00:23:05.237 } 00:23:05.237 ] 00:23:05.237 }' 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:05.237 "name": "raid_bdev1", 00:23:05.237 "uuid": "0c9dceb1-3fc1-4a8b-8a3c-b83183669b1e", 00:23:05.237 "strip_size_kb": 0, 00:23:05.237 "state": "online", 00:23:05.237 "raid_level": "raid1", 00:23:05.237 "superblock": false, 00:23:05.237 "num_base_bdevs": 2, 00:23:05.237 "num_base_bdevs_discovered": 2, 00:23:05.237 "num_base_bdevs_operational": 2, 00:23:05.237 "base_bdevs_list": [ 00:23:05.237 { 00:23:05.237 "name": "spare", 00:23:05.237 "uuid": "af90cf26-3ff4-5250-9955-ee5750b1e093", 00:23:05.237 "is_configured": true, 00:23:05.237 "data_offset": 0, 00:23:05.237 "data_size": 65536 00:23:05.237 }, 00:23:05.237 { 00:23:05.237 "name": "BaseBdev2", 00:23:05.237 "uuid": "85478223-832b-5fd9-8905-7271b843c47b", 00:23:05.237 "is_configured": true, 00:23:05.237 "data_offset": 0, 00:23:05.237 "data_size": 65536 00:23:05.237 } 00:23:05.237 ] 00:23:05.237 }' 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:05.237 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:05.238 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:05.238 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:05.238 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:05.238 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:05.238 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:05.238 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:05.238 12:54:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:05.238 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:05.238 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:05.238 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:05.238 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:05.238 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:05.238 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.238 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:05.238 12:54:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.238 12:54:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.238 12:54:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.238 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:05.238 "name": "raid_bdev1", 00:23:05.238 "uuid": "0c9dceb1-3fc1-4a8b-8a3c-b83183669b1e", 00:23:05.238 "strip_size_kb": 0, 00:23:05.238 "state": "online", 00:23:05.238 "raid_level": "raid1", 00:23:05.238 "superblock": false, 00:23:05.238 "num_base_bdevs": 2, 00:23:05.238 "num_base_bdevs_discovered": 2, 00:23:05.238 "num_base_bdevs_operational": 2, 00:23:05.238 "base_bdevs_list": [ 00:23:05.238 { 00:23:05.238 "name": "spare", 00:23:05.238 "uuid": "af90cf26-3ff4-5250-9955-ee5750b1e093", 00:23:05.238 "is_configured": true, 00:23:05.238 "data_offset": 0, 00:23:05.238 "data_size": 65536 00:23:05.238 }, 00:23:05.238 { 00:23:05.238 "name": "BaseBdev2", 00:23:05.238 "uuid": "85478223-832b-5fd9-8905-7271b843c47b", 00:23:05.238 "is_configured": true, 00:23:05.238 
"data_offset": 0, 00:23:05.238 "data_size": 65536 00:23:05.238 } 00:23:05.238 ] 00:23:05.238 }' 00:23:05.238 12:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:05.238 12:54:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.496 12:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:05.496 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.496 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.496 [2024-12-05 12:54:48.068817] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:05.496 [2024-12-05 12:54:48.068845] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:05.496 [2024-12-05 12:54:48.068907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:05.496 [2024-12-05 12:54:48.068962] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:05.496 [2024-12-05 12:54:48.068970] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:05.496 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.496 12:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:05.496 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.496 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.496 12:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:23:05.752 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.752 12:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:05.752 12:54:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:05.752 12:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:05.752 12:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:05.752 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:05.752 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:05.752 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:05.752 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:05.752 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:05.752 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:05.752 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:05.752 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:05.752 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:05.752 /dev/nbd0 00:23:05.752 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:05.753 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:05.753 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:05.753 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:23:05.753 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:05.753 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:05.753 12:54:48 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:05.753 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:23:05.753 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:05.753 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:05.753 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:05.753 1+0 records in 00:23:05.753 1+0 records out 00:23:05.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204289 s, 20.1 MB/s 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:06.009 /dev/nbd1 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:06.009 12:54:48 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:06.009 1+0 records in 00:23:06.009 1+0 records out 00:23:06.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271952 s, 15.1 MB/s 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:06.009 12:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:06.266 12:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks 
/var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:06.266 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:06.266 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:06.266 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:06.266 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:06.266 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:06.266 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:06.523 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:06.523 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:06.523 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:06.523 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:06.523 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:06.523 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:06.523 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:06.523 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:06.523 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:06.523 12:54:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:06.523 12:54:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:06.523 12:54:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:06.523 
12:54:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:06.523 12:54:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:06.523 12:54:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:06.523 12:54:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:06.523 12:54:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:06.523 12:54:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:06.523 12:54:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:23:06.523 12:54:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 73088 00:23:06.523 12:54:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 73088 ']' 00:23:06.523 12:54:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 73088 00:23:06.523 12:54:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:23:06.523 12:54:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.523 12:54:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73088 00:23:06.780 12:54:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:06.780 killing process with pid 73088 00:23:06.780 12:54:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:06.780 12:54:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73088' 00:23:06.780 Received shutdown signal, test time was about 60.000000 seconds 00:23:06.780 00:23:06.780 Latency(us) 00:23:06.780 [2024-12-05T12:54:49.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.780 [2024-12-05T12:54:49.367Z] 
=================================================================================================================== 00:23:06.780 [2024-12-05T12:54:49.367Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:06.780 12:54:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 73088 00:23:06.780 12:54:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 73088 00:23:06.780 [2024-12-05 12:54:49.108348] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:06.780 [2024-12-05 12:54:49.252078] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:23:07.346 00:23:07.346 real 0m13.339s 00:23:07.346 user 0m15.083s 00:23:07.346 sys 0m2.385s 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.346 ************************************ 00:23:07.346 END TEST raid_rebuild_test 00:23:07.346 ************************************ 00:23:07.346 12:54:49 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:23:07.346 12:54:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:07.346 12:54:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:07.346 12:54:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:07.346 ************************************ 00:23:07.346 START TEST raid_rebuild_test_sb 00:23:07.346 ************************************ 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # 
local num_base_bdevs=2 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:07.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=73483 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 73483 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73483 ']' 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.346 12:54:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.604 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:07.604 Zero copy mechanism will not be used. 00:23:07.604 [2024-12-05 12:54:49.931468] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:23:07.604 [2024-12-05 12:54:49.931602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73483 ] 00:23:07.604 [2024-12-05 12:54:50.087292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.862 [2024-12-05 12:54:50.189520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.862 [2024-12-05 12:54:50.326978] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:07.862 [2024-12-05 12:54:50.327026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.428 BaseBdev1_malloc 00:23:08.428 
12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.428 [2024-12-05 12:54:50.802217] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:08.428 [2024-12-05 12:54:50.802274] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.428 [2024-12-05 12:54:50.802294] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:08.428 [2024-12-05 12:54:50.802305] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.428 [2024-12-05 12:54:50.804408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.428 [2024-12-05 12:54:50.804447] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:08.428 BaseBdev1 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.428 BaseBdev2_malloc 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.428 [2024-12-05 12:54:50.838076] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:08.428 [2024-12-05 12:54:50.838127] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.428 [2024-12-05 12:54:50.838145] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:08.428 [2024-12-05 12:54:50.838155] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.428 [2024-12-05 12:54:50.840224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.428 [2024-12-05 12:54:50.840258] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:08.428 BaseBdev2 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.428 spare_malloc 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.428 spare_delay 00:23:08.428 
12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.428 [2024-12-05 12:54:50.893597] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:08.428 [2024-12-05 12:54:50.893646] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.428 [2024-12-05 12:54:50.893662] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:08.428 [2024-12-05 12:54:50.893672] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.428 [2024-12-05 12:54:50.895751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.428 [2024-12-05 12:54:50.895785] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:08.428 spare 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.428 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.428 [2024-12-05 12:54:50.901653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:08.428 [2024-12-05 12:54:50.903422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:08.428 [2024-12-05 12:54:50.903594] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io 
device register 0x617000007780 00:23:08.428 [2024-12-05 12:54:50.903615] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:08.428 [2024-12-05 12:54:50.903850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:08.428 [2024-12-05 12:54:50.904011] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:08.428 [2024-12-05 12:54:50.904027] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:08.428 [2024-12-05 12:54:50.904162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:08.429 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.429 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:08.429 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:08.429 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:08.429 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:08.429 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:08.429 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:08.429 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:08.429 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:08.429 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:08.429 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:08.429 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:23:08.429 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.429 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.429 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.429 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.429 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:08.429 "name": "raid_bdev1", 00:23:08.429 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:08.429 "strip_size_kb": 0, 00:23:08.429 "state": "online", 00:23:08.429 "raid_level": "raid1", 00:23:08.429 "superblock": true, 00:23:08.429 "num_base_bdevs": 2, 00:23:08.429 "num_base_bdevs_discovered": 2, 00:23:08.429 "num_base_bdevs_operational": 2, 00:23:08.429 "base_bdevs_list": [ 00:23:08.429 { 00:23:08.429 "name": "BaseBdev1", 00:23:08.429 "uuid": "aee26c1c-45b6-5766-ab75-656714a81183", 00:23:08.429 "is_configured": true, 00:23:08.429 "data_offset": 2048, 00:23:08.429 "data_size": 63488 00:23:08.429 }, 00:23:08.429 { 00:23:08.429 "name": "BaseBdev2", 00:23:08.429 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:08.429 "is_configured": true, 00:23:08.429 "data_offset": 2048, 00:23:08.429 "data_size": 63488 00:23:08.429 } 00:23:08.429 ] 00:23:08.429 }' 00:23:08.429 12:54:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:08.429 12:54:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.688 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:08.688 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:08.688 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.688 12:54:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:08.688 [2024-12-05 12:54:51.226001] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:08.688 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.688 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:23:08.688 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.688 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:08.688 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.688 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.688 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.999 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:23:08.999 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:08.999 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:08.999 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:08.999 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:08.999 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:08.999 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:08.999 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:08.999 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:08.999 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:23:08.999 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:08.999 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:08.999 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:08.999 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:08.999 [2024-12-05 12:54:51.477790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:09.257 /dev/nbd0 00:23:09.257 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:09.257 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:09.257 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:09.257 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:23:09.257 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:09.257 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:09.257 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:09.257 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:23:09.257 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:09.257 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:09.257 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:09.257 1+0 records in 00:23:09.257 1+0 records out 00:23:09.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264846 s, 15.5 MB/s 00:23:09.257 
12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.257 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:23:09.257 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.257 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:09.257 12:54:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:23:09.257 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:09.257 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:09.257 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:23:09.257 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:23:09.257 12:54:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:23:13.441 63488+0 records in 00:23:13.441 63488+0 records out 00:23:13.441 32505856 bytes (33 MB, 31 MiB) copied, 4.20275 s, 7.7 MB/s 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:13.441 [2024-12-05 12:54:55.931955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:13.441 [2024-12-05 12:54:55.968610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:13.441 12:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.441 12:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:13.441 "name": "raid_bdev1", 00:23:13.441 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:13.441 "strip_size_kb": 0, 00:23:13.441 "state": "online", 00:23:13.441 "raid_level": "raid1", 00:23:13.441 "superblock": true, 00:23:13.441 "num_base_bdevs": 2, 00:23:13.441 "num_base_bdevs_discovered": 1, 00:23:13.441 "num_base_bdevs_operational": 1, 00:23:13.441 "base_bdevs_list": [ 00:23:13.441 { 00:23:13.441 "name": null, 00:23:13.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.441 "is_configured": false, 00:23:13.441 "data_offset": 0, 00:23:13.441 "data_size": 63488 00:23:13.441 }, 00:23:13.441 { 00:23:13.441 "name": "BaseBdev2", 00:23:13.441 "uuid": 
"d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:13.441 "is_configured": true, 00:23:13.441 "data_offset": 2048, 00:23:13.441 "data_size": 63488 00:23:13.441 } 00:23:13.441 ] 00:23:13.441 }' 00:23:13.441 12:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:13.441 12:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:13.698 12:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:13.698 12:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.698 12:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:13.698 [2024-12-05 12:54:56.276693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:13.955 [2024-12-05 12:54:56.286159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:23:13.955 12:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.955 12:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:13.955 [2024-12-05 12:54:56.287794] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:14.943 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:14.943 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:14.943 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:14.943 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:14.943 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:14.943 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.943 12:54:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.943 12:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.943 12:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:14.943 12:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.943 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:14.943 "name": "raid_bdev1", 00:23:14.943 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:14.943 "strip_size_kb": 0, 00:23:14.943 "state": "online", 00:23:14.943 "raid_level": "raid1", 00:23:14.943 "superblock": true, 00:23:14.943 "num_base_bdevs": 2, 00:23:14.943 "num_base_bdevs_discovered": 2, 00:23:14.943 "num_base_bdevs_operational": 2, 00:23:14.944 "process": { 00:23:14.944 "type": "rebuild", 00:23:14.944 "target": "spare", 00:23:14.944 "progress": { 00:23:14.944 "blocks": 20480, 00:23:14.944 "percent": 32 00:23:14.944 } 00:23:14.944 }, 00:23:14.944 "base_bdevs_list": [ 00:23:14.944 { 00:23:14.944 "name": "spare", 00:23:14.944 "uuid": "9a6e8e1c-5afc-5a3b-9d54-36b31512a304", 00:23:14.944 "is_configured": true, 00:23:14.944 "data_offset": 2048, 00:23:14.944 "data_size": 63488 00:23:14.944 }, 00:23:14.944 { 00:23:14.944 "name": "BaseBdev2", 00:23:14.944 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:14.944 "is_configured": true, 00:23:14.944 "data_offset": 2048, 00:23:14.944 "data_size": 63488 00:23:14.944 } 00:23:14.944 ] 00:23:14.944 }' 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:14.944 [2024-12-05 12:54:57.394053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:14.944 [2024-12-05 12:54:57.493116] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:14.944 [2024-12-05 12:54:57.493314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:14.944 [2024-12-05 12:54:57.493366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:14.944 [2024-12-05 12:54:57.493379] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.944 12:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.201 12:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.201 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:15.201 "name": "raid_bdev1", 00:23:15.201 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:15.201 "strip_size_kb": 0, 00:23:15.201 "state": "online", 00:23:15.201 "raid_level": "raid1", 00:23:15.201 "superblock": true, 00:23:15.201 "num_base_bdevs": 2, 00:23:15.201 "num_base_bdevs_discovered": 1, 00:23:15.201 "num_base_bdevs_operational": 1, 00:23:15.201 "base_bdevs_list": [ 00:23:15.201 { 00:23:15.201 "name": null, 00:23:15.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.201 "is_configured": false, 00:23:15.201 "data_offset": 0, 00:23:15.201 "data_size": 63488 00:23:15.201 }, 00:23:15.201 { 00:23:15.201 "name": "BaseBdev2", 00:23:15.201 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:15.201 "is_configured": true, 00:23:15.201 "data_offset": 2048, 00:23:15.201 "data_size": 63488 00:23:15.201 } 00:23:15.201 ] 00:23:15.201 }' 00:23:15.201 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:15.201 12:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.458 12:54:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:15.458 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:15.458 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:15.458 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:15.458 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:15.458 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.458 12:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.458 12:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.458 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.458 12:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.458 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:15.458 "name": "raid_bdev1", 00:23:15.458 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:15.458 "strip_size_kb": 0, 00:23:15.459 "state": "online", 00:23:15.459 "raid_level": "raid1", 00:23:15.459 "superblock": true, 00:23:15.459 "num_base_bdevs": 2, 00:23:15.459 "num_base_bdevs_discovered": 1, 00:23:15.459 "num_base_bdevs_operational": 1, 00:23:15.459 "base_bdevs_list": [ 00:23:15.459 { 00:23:15.459 "name": null, 00:23:15.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.459 "is_configured": false, 00:23:15.459 "data_offset": 0, 00:23:15.459 "data_size": 63488 00:23:15.459 }, 00:23:15.459 { 00:23:15.459 "name": "BaseBdev2", 00:23:15.459 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:15.459 "is_configured": true, 00:23:15.459 "data_offset": 2048, 00:23:15.459 "data_size": 63488 00:23:15.459 } 00:23:15.459 ] 
00:23:15.459 }' 00:23:15.459 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:15.459 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:15.459 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:15.459 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:15.459 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:15.459 12:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.459 12:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.459 [2024-12-05 12:54:57.920325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:15.459 [2024-12-05 12:54:57.929758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:23:15.459 12:54:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.459 12:54:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:15.459 [2024-12-05 12:54:57.931296] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:16.391 12:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:16.391 12:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:16.391 12:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:16.391 12:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:16.391 12:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:16.391 12:54:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.391 12:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.391 12:54:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.391 12:54:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.391 12:54:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.391 12:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:16.391 "name": "raid_bdev1", 00:23:16.391 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:16.391 "strip_size_kb": 0, 00:23:16.391 "state": "online", 00:23:16.391 "raid_level": "raid1", 00:23:16.391 "superblock": true, 00:23:16.391 "num_base_bdevs": 2, 00:23:16.391 "num_base_bdevs_discovered": 2, 00:23:16.391 "num_base_bdevs_operational": 2, 00:23:16.391 "process": { 00:23:16.391 "type": "rebuild", 00:23:16.391 "target": "spare", 00:23:16.391 "progress": { 00:23:16.391 "blocks": 20480, 00:23:16.391 "percent": 32 00:23:16.391 } 00:23:16.391 }, 00:23:16.391 "base_bdevs_list": [ 00:23:16.391 { 00:23:16.391 "name": "spare", 00:23:16.391 "uuid": "9a6e8e1c-5afc-5a3b-9d54-36b31512a304", 00:23:16.391 "is_configured": true, 00:23:16.391 "data_offset": 2048, 00:23:16.391 "data_size": 63488 00:23:16.391 }, 00:23:16.391 { 00:23:16.391 "name": "BaseBdev2", 00:23:16.391 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:16.391 "is_configured": true, 00:23:16.391 "data_offset": 2048, 00:23:16.391 "data_size": 63488 00:23:16.391 } 00:23:16.391 ] 00:23:16.391 }' 00:23:16.391 12:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:16.650 12:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:16.650 12:54:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:23:16.650 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=292 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.650 
12:54:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:16.650 "name": "raid_bdev1", 00:23:16.650 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:16.650 "strip_size_kb": 0, 00:23:16.650 "state": "online", 00:23:16.650 "raid_level": "raid1", 00:23:16.650 "superblock": true, 00:23:16.650 "num_base_bdevs": 2, 00:23:16.650 "num_base_bdevs_discovered": 2, 00:23:16.650 "num_base_bdevs_operational": 2, 00:23:16.650 "process": { 00:23:16.650 "type": "rebuild", 00:23:16.650 "target": "spare", 00:23:16.650 "progress": { 00:23:16.650 "blocks": 22528, 00:23:16.650 "percent": 35 00:23:16.650 } 00:23:16.650 }, 00:23:16.650 "base_bdevs_list": [ 00:23:16.650 { 00:23:16.650 "name": "spare", 00:23:16.650 "uuid": "9a6e8e1c-5afc-5a3b-9d54-36b31512a304", 00:23:16.650 "is_configured": true, 00:23:16.650 "data_offset": 2048, 00:23:16.650 "data_size": 63488 00:23:16.650 }, 00:23:16.650 { 00:23:16.650 "name": "BaseBdev2", 00:23:16.650 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:16.650 "is_configured": true, 00:23:16.650 "data_offset": 2048, 00:23:16.650 "data_size": 63488 00:23:16.650 } 00:23:16.650 ] 00:23:16.650 }' 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:16.650 12:54:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:17.584 12:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:17.584 12:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:17.584 12:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:17.584 12:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:17.584 12:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:17.584 12:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:17.584 12:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.584 12:55:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.584 12:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.584 12:55:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:17.584 12:55:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.842 12:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:17.842 "name": "raid_bdev1", 00:23:17.842 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:17.842 "strip_size_kb": 0, 00:23:17.842 "state": "online", 00:23:17.842 "raid_level": "raid1", 00:23:17.842 "superblock": true, 00:23:17.842 "num_base_bdevs": 2, 00:23:17.842 "num_base_bdevs_discovered": 2, 00:23:17.842 "num_base_bdevs_operational": 2, 00:23:17.842 "process": { 00:23:17.842 "type": "rebuild", 00:23:17.842 "target": "spare", 00:23:17.842 "progress": { 00:23:17.842 "blocks": 45056, 00:23:17.842 "percent": 70 00:23:17.842 } 00:23:17.842 }, 00:23:17.842 "base_bdevs_list": [ 00:23:17.842 { 00:23:17.842 "name": "spare", 00:23:17.842 "uuid": "9a6e8e1c-5afc-5a3b-9d54-36b31512a304", 00:23:17.842 "is_configured": true, 00:23:17.842 "data_offset": 2048, 00:23:17.842 "data_size": 63488 00:23:17.842 }, 00:23:17.842 { 00:23:17.842 "name": 
"BaseBdev2", 00:23:17.842 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:17.842 "is_configured": true, 00:23:17.842 "data_offset": 2048, 00:23:17.842 "data_size": 63488 00:23:17.842 } 00:23:17.842 ] 00:23:17.842 }' 00:23:17.842 12:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:17.842 12:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:17.842 12:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:17.842 12:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:17.842 12:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:18.523 [2024-12-05 12:55:01.045395] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:18.523 [2024-12-05 12:55:01.045464] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:18.523 [2024-12-05 12:55:01.045585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:18.780 "name": "raid_bdev1", 00:23:18.780 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:18.780 "strip_size_kb": 0, 00:23:18.780 "state": "online", 00:23:18.780 "raid_level": "raid1", 00:23:18.780 "superblock": true, 00:23:18.780 "num_base_bdevs": 2, 00:23:18.780 "num_base_bdevs_discovered": 2, 00:23:18.780 "num_base_bdevs_operational": 2, 00:23:18.780 "base_bdevs_list": [ 00:23:18.780 { 00:23:18.780 "name": "spare", 00:23:18.780 "uuid": "9a6e8e1c-5afc-5a3b-9d54-36b31512a304", 00:23:18.780 "is_configured": true, 00:23:18.780 "data_offset": 2048, 00:23:18.780 "data_size": 63488 00:23:18.780 }, 00:23:18.780 { 00:23:18.780 "name": "BaseBdev2", 00:23:18.780 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:18.780 "is_configured": true, 00:23:18.780 "data_offset": 2048, 00:23:18.780 "data_size": 63488 00:23:18.780 } 00:23:18.780 ] 00:23:18.780 }' 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.780 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:18.780 "name": "raid_bdev1", 00:23:18.780 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:18.780 "strip_size_kb": 0, 00:23:18.780 "state": "online", 00:23:18.780 "raid_level": "raid1", 00:23:18.780 "superblock": true, 00:23:18.780 "num_base_bdevs": 2, 00:23:18.780 "num_base_bdevs_discovered": 2, 00:23:18.780 "num_base_bdevs_operational": 2, 00:23:18.780 "base_bdevs_list": [ 00:23:18.780 { 00:23:18.780 "name": "spare", 00:23:18.780 "uuid": "9a6e8e1c-5afc-5a3b-9d54-36b31512a304", 00:23:18.780 "is_configured": true, 00:23:18.780 "data_offset": 2048, 00:23:18.780 "data_size": 63488 00:23:18.780 }, 00:23:18.780 { 00:23:18.780 "name": "BaseBdev2", 00:23:18.780 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:18.780 "is_configured": true, 00:23:18.780 "data_offset": 2048, 00:23:18.780 "data_size": 63488 00:23:18.780 } 00:23:18.780 ] 00:23:18.780 }' 00:23:18.780 12:55:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:19.038 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:19.038 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:19.038 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:19.038 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:19.038 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:19.038 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:19.038 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:19.039 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:19.039 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:19.039 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:19.039 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:19.039 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:19.039 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:19.039 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.039 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.039 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.039 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.039 12:55:01 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.039 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:19.039 "name": "raid_bdev1", 00:23:19.039 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:19.039 "strip_size_kb": 0, 00:23:19.039 "state": "online", 00:23:19.039 "raid_level": "raid1", 00:23:19.039 "superblock": true, 00:23:19.039 "num_base_bdevs": 2, 00:23:19.039 "num_base_bdevs_discovered": 2, 00:23:19.039 "num_base_bdevs_operational": 2, 00:23:19.039 "base_bdevs_list": [ 00:23:19.039 { 00:23:19.039 "name": "spare", 00:23:19.039 "uuid": "9a6e8e1c-5afc-5a3b-9d54-36b31512a304", 00:23:19.039 "is_configured": true, 00:23:19.039 "data_offset": 2048, 00:23:19.039 "data_size": 63488 00:23:19.039 }, 00:23:19.039 { 00:23:19.039 "name": "BaseBdev2", 00:23:19.039 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:19.039 "is_configured": true, 00:23:19.039 "data_offset": 2048, 00:23:19.039 "data_size": 63488 00:23:19.039 } 00:23:19.039 ] 00:23:19.039 }' 00:23:19.039 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:19.039 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.297 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:19.297 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.297 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.297 [2024-12-05 12:55:01.728020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:19.297 [2024-12-05 12:55:01.728048] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:19.297 [2024-12-05 12:55:01.728115] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:19.297 [2024-12-05 12:55:01.728175] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:19.297 [2024-12-05 12:55:01.728185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:19.297 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.297 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.297 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.297 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:23:19.297 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:19.297 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.297 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:19.297 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:19.297 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:19.297 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:19.297 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:19.297 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:19.297 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:19.297 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:19.297 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:19.297 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:19.297 12:55:01 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:19.297 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:19.297 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:19.555 /dev/nbd0 00:23:19.555 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:19.555 12:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:19.555 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:19.555 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:23:19.555 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:19.555 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:19.555 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:19.555 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:23:19.555 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:19.556 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:19.556 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:19.556 1+0 records in 00:23:19.556 1+0 records out 00:23:19.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242069 s, 16.9 MB/s 00:23:19.556 12:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:19.556 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 
00:23:19.556 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:19.556 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:19.556 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:23:19.556 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:19.556 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:19.556 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:19.814 /dev/nbd1 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:19.814 1+0 
records in 00:23:19.814 1+0 records out 00:23:19.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318173 s, 12.9 MB/s 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:19.814 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:20.073 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:20.073 12:55:02 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:20.073 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:20.073 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:20.073 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:20.073 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:20.073 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:20.073 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:20.073 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:20.073 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:20.331 12:55:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.331 [2024-12-05 12:55:02.720847] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:20.331 [2024-12-05 12:55:02.720993] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:20.331 [2024-12-05 12:55:02.721019] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:20.331 [2024-12-05 12:55:02.721028] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:20.331 [2024-12-05 12:55:02.722873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:20.331 [2024-12-05 12:55:02.722904] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:20.331 [2024-12-05 12:55:02.722983] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:20.331 [2024-12-05 12:55:02.723018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:20.331 [2024-12-05 12:55:02.723124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:20.331 spare 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:20.331 12:55:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.331 [2024-12-05 12:55:02.823200] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:20.331 [2024-12-05 12:55:02.823238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:20.331 [2024-12-05 12:55:02.823520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:23:20.331 [2024-12-05 12:55:02.823689] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:20.331 [2024-12-05 12:55:02.823696] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:20.331 [2024-12-05 12:55:02.823834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:20.331 12:55:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:20.331 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:20.332 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.332 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.332 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.332 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.332 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.332 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:20.332 "name": "raid_bdev1", 00:23:20.332 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:20.332 "strip_size_kb": 0, 00:23:20.332 "state": "online", 00:23:20.332 "raid_level": "raid1", 00:23:20.332 "superblock": true, 00:23:20.332 "num_base_bdevs": 2, 00:23:20.332 "num_base_bdevs_discovered": 2, 00:23:20.332 "num_base_bdevs_operational": 2, 00:23:20.332 "base_bdevs_list": [ 00:23:20.332 { 00:23:20.332 "name": "spare", 00:23:20.332 "uuid": "9a6e8e1c-5afc-5a3b-9d54-36b31512a304", 00:23:20.332 "is_configured": true, 00:23:20.332 "data_offset": 2048, 00:23:20.332 "data_size": 63488 00:23:20.332 }, 00:23:20.332 { 00:23:20.332 "name": "BaseBdev2", 00:23:20.332 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:20.332 "is_configured": true, 00:23:20.332 "data_offset": 2048, 00:23:20.332 "data_size": 63488 00:23:20.332 } 00:23:20.332 ] 00:23:20.332 }' 00:23:20.332 12:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:20.332 12:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.589 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:23:20.589 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:20.589 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:20.589 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:20.589 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:20.589 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.589 12:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.589 12:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.589 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.589 12:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.589 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:20.589 "name": "raid_bdev1", 00:23:20.589 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:20.589 "strip_size_kb": 0, 00:23:20.589 "state": "online", 00:23:20.589 "raid_level": "raid1", 00:23:20.589 "superblock": true, 00:23:20.589 "num_base_bdevs": 2, 00:23:20.589 "num_base_bdevs_discovered": 2, 00:23:20.589 "num_base_bdevs_operational": 2, 00:23:20.589 "base_bdevs_list": [ 00:23:20.589 { 00:23:20.589 "name": "spare", 00:23:20.589 "uuid": "9a6e8e1c-5afc-5a3b-9d54-36b31512a304", 00:23:20.589 "is_configured": true, 00:23:20.589 "data_offset": 2048, 00:23:20.589 "data_size": 63488 00:23:20.589 }, 00:23:20.589 { 00:23:20.589 "name": "BaseBdev2", 00:23:20.589 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:20.589 "is_configured": true, 00:23:20.589 "data_offset": 2048, 00:23:20.589 "data_size": 63488 00:23:20.589 } 00:23:20.589 ] 00:23:20.589 }' 
00:23:20.589 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:20.847 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:20.847 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:20.847 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:20.847 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.847 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:20.847 12:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.847 12:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.847 12:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.847 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:20.847 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:20.847 12:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.847 12:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.847 [2024-12-05 12:55:03.261005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:20.847 12:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.847 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:20.847 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:20.847 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:23:20.847 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:20.847 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:20.847 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:20.847 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:20.848 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:20.848 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:20.848 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:20.848 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.848 12:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.848 12:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.848 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.848 12:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.848 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:20.848 "name": "raid_bdev1", 00:23:20.848 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:20.848 "strip_size_kb": 0, 00:23:20.848 "state": "online", 00:23:20.848 "raid_level": "raid1", 00:23:20.848 "superblock": true, 00:23:20.848 "num_base_bdevs": 2, 00:23:20.848 "num_base_bdevs_discovered": 1, 00:23:20.848 "num_base_bdevs_operational": 1, 00:23:20.848 "base_bdevs_list": [ 00:23:20.848 { 00:23:20.848 "name": null, 00:23:20.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.848 "is_configured": false, 00:23:20.848 "data_offset": 0, 00:23:20.848 "data_size": 63488 00:23:20.848 }, 
00:23:20.848 { 00:23:20.848 "name": "BaseBdev2", 00:23:20.848 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:20.848 "is_configured": true, 00:23:20.848 "data_offset": 2048, 00:23:20.848 "data_size": 63488 00:23:20.848 } 00:23:20.848 ] 00:23:20.848 }' 00:23:20.848 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:20.848 12:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.104 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:21.104 12:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.104 12:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:21.104 [2024-12-05 12:55:03.597087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:21.104 [2024-12-05 12:55:03.597247] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:21.104 [2024-12-05 12:55:03.597262] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:21.104 [2024-12-05 12:55:03.597300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:21.104 [2024-12-05 12:55:03.606309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:23:21.104 12:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.104 12:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:21.104 [2024-12-05 12:55:03.607955] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:22.033 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:22.033 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:22.033 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:22.033 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:22.033 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:22.290 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.290 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.290 12:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:22.291 "name": "raid_bdev1", 00:23:22.291 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:22.291 "strip_size_kb": 0, 00:23:22.291 "state": "online", 00:23:22.291 "raid_level": "raid1", 
00:23:22.291 "superblock": true, 00:23:22.291 "num_base_bdevs": 2, 00:23:22.291 "num_base_bdevs_discovered": 2, 00:23:22.291 "num_base_bdevs_operational": 2, 00:23:22.291 "process": { 00:23:22.291 "type": "rebuild", 00:23:22.291 "target": "spare", 00:23:22.291 "progress": { 00:23:22.291 "blocks": 20480, 00:23:22.291 "percent": 32 00:23:22.291 } 00:23:22.291 }, 00:23:22.291 "base_bdevs_list": [ 00:23:22.291 { 00:23:22.291 "name": "spare", 00:23:22.291 "uuid": "9a6e8e1c-5afc-5a3b-9d54-36b31512a304", 00:23:22.291 "is_configured": true, 00:23:22.291 "data_offset": 2048, 00:23:22.291 "data_size": 63488 00:23:22.291 }, 00:23:22.291 { 00:23:22.291 "name": "BaseBdev2", 00:23:22.291 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:22.291 "is_configured": true, 00:23:22.291 "data_offset": 2048, 00:23:22.291 "data_size": 63488 00:23:22.291 } 00:23:22.291 ] 00:23:22.291 }' 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.291 [2024-12-05 12:55:04.718254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:22.291 [2024-12-05 12:55:04.813249] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:22.291 [2024-12-05 12:55:04.813310] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:23:22.291 [2024-12-05 12:55:04.813322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:22.291 [2024-12-05 12:55:04.813331] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:22.291 "name": "raid_bdev1", 00:23:22.291 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:22.291 "strip_size_kb": 0, 00:23:22.291 "state": "online", 00:23:22.291 "raid_level": "raid1", 00:23:22.291 "superblock": true, 00:23:22.291 "num_base_bdevs": 2, 00:23:22.291 "num_base_bdevs_discovered": 1, 00:23:22.291 "num_base_bdevs_operational": 1, 00:23:22.291 "base_bdevs_list": [ 00:23:22.291 { 00:23:22.291 "name": null, 00:23:22.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.291 "is_configured": false, 00:23:22.291 "data_offset": 0, 00:23:22.291 "data_size": 63488 00:23:22.291 }, 00:23:22.291 { 00:23:22.291 "name": "BaseBdev2", 00:23:22.291 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:22.291 "is_configured": true, 00:23:22.291 "data_offset": 2048, 00:23:22.291 "data_size": 63488 00:23:22.291 } 00:23:22.291 ] 00:23:22.291 }' 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:22.291 12:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.606 12:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:22.606 12:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.606 12:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.606 [2024-12-05 12:55:05.183853] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:22.606 [2024-12-05 12:55:05.184053] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.606 [2024-12-05 12:55:05.184075] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:22.606 [2024-12-05 12:55:05.184084] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.606 [2024-12-05 12:55:05.184445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.606 [2024-12-05 12:55:05.184466] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:22.606 [2024-12-05 12:55:05.184548] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:22.606 [2024-12-05 12:55:05.184559] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:22.606 [2024-12-05 12:55:05.184567] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:22.606 [2024-12-05 12:55:05.184588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:22.863 [2024-12-05 12:55:05.193216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:23:22.863 spare 00:23:22.863 12:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.863 12:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:22.863 [2024-12-05 12:55:05.194763] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:23.801 "name": "raid_bdev1", 00:23:23.801 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:23.801 "strip_size_kb": 0, 00:23:23.801 "state": "online", 00:23:23.801 "raid_level": "raid1", 00:23:23.801 "superblock": true, 00:23:23.801 "num_base_bdevs": 2, 00:23:23.801 "num_base_bdevs_discovered": 2, 00:23:23.801 "num_base_bdevs_operational": 2, 00:23:23.801 "process": { 00:23:23.801 "type": "rebuild", 00:23:23.801 "target": "spare", 00:23:23.801 "progress": { 00:23:23.801 "blocks": 20480, 00:23:23.801 "percent": 32 00:23:23.801 } 00:23:23.801 }, 00:23:23.801 "base_bdevs_list": [ 00:23:23.801 { 00:23:23.801 "name": "spare", 00:23:23.801 "uuid": "9a6e8e1c-5afc-5a3b-9d54-36b31512a304", 00:23:23.801 "is_configured": true, 00:23:23.801 "data_offset": 2048, 00:23:23.801 "data_size": 63488 00:23:23.801 }, 00:23:23.801 { 00:23:23.801 "name": "BaseBdev2", 00:23:23.801 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:23.801 "is_configured": true, 00:23:23.801 "data_offset": 2048, 00:23:23.801 "data_size": 63488 00:23:23.801 } 00:23:23.801 ] 00:23:23.801 }' 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:23.801 
12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.801 [2024-12-05 12:55:06.296958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:23.801 [2024-12-05 12:55:06.299469] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:23.801 [2024-12-05 12:55:06.299608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:23.801 [2024-12-05 12:55:06.299700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:23.801 [2024-12-05 12:55:06.299719] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:23.801 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:23.802 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:23.802 12:55:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:23.802 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:23.802 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:23.802 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:23.802 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.802 12:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.802 12:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.802 12:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.802 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:23.802 "name": "raid_bdev1", 00:23:23.802 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:23.802 "strip_size_kb": 0, 00:23:23.802 "state": "online", 00:23:23.802 "raid_level": "raid1", 00:23:23.802 "superblock": true, 00:23:23.802 "num_base_bdevs": 2, 00:23:23.802 "num_base_bdevs_discovered": 1, 00:23:23.802 "num_base_bdevs_operational": 1, 00:23:23.802 "base_bdevs_list": [ 00:23:23.802 { 00:23:23.802 "name": null, 00:23:23.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.802 "is_configured": false, 00:23:23.802 "data_offset": 0, 00:23:23.802 "data_size": 63488 00:23:23.802 }, 00:23:23.802 { 00:23:23.802 "name": "BaseBdev2", 00:23:23.802 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:23.802 "is_configured": true, 00:23:23.802 "data_offset": 2048, 00:23:23.802 "data_size": 63488 00:23:23.802 } 00:23:23.802 ] 00:23:23.802 }' 00:23:23.802 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:23.802 12:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.366 12:55:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:24.366 "name": "raid_bdev1", 00:23:24.366 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:24.366 "strip_size_kb": 0, 00:23:24.366 "state": "online", 00:23:24.366 "raid_level": "raid1", 00:23:24.366 "superblock": true, 00:23:24.366 "num_base_bdevs": 2, 00:23:24.366 "num_base_bdevs_discovered": 1, 00:23:24.366 "num_base_bdevs_operational": 1, 00:23:24.366 "base_bdevs_list": [ 00:23:24.366 { 00:23:24.366 "name": null, 00:23:24.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.366 "is_configured": false, 00:23:24.366 "data_offset": 0, 00:23:24.366 "data_size": 63488 00:23:24.366 }, 00:23:24.366 { 00:23:24.366 "name": "BaseBdev2", 00:23:24.366 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:24.366 "is_configured": true, 00:23:24.366 "data_offset": 2048, 00:23:24.366 "data_size": 
63488 00:23:24.366 } 00:23:24.366 ] 00:23:24.366 }' 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.366 [2024-12-05 12:55:06.750204] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:24.366 [2024-12-05 12:55:06.750337] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:24.366 [2024-12-05 12:55:06.750363] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:24.366 [2024-12-05 12:55:06.750372] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:24.366 [2024-12-05 12:55:06.750722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:24.366 [2024-12-05 12:55:06.750740] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:23:24.366 [2024-12-05 12:55:06.750801] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:24.366 [2024-12-05 12:55:06.750811] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:24.366 [2024-12-05 12:55:06.750819] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:24.366 [2024-12-05 12:55:06.750826] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:24.366 BaseBdev1 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.366 12:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:25.299 12:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:25.299 12:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:25.299 12:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:25.300 12:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:25.300 12:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:25.300 12:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:25.300 12:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:25.300 12:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:25.300 12:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:25.300 12:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:25.300 12:55:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:25.300 12:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.300 12:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.300 12:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.300 12:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.300 12:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:25.300 "name": "raid_bdev1", 00:23:25.300 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:25.300 "strip_size_kb": 0, 00:23:25.300 "state": "online", 00:23:25.300 "raid_level": "raid1", 00:23:25.300 "superblock": true, 00:23:25.300 "num_base_bdevs": 2, 00:23:25.300 "num_base_bdevs_discovered": 1, 00:23:25.300 "num_base_bdevs_operational": 1, 00:23:25.300 "base_bdevs_list": [ 00:23:25.300 { 00:23:25.300 "name": null, 00:23:25.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.300 "is_configured": false, 00:23:25.300 "data_offset": 0, 00:23:25.300 "data_size": 63488 00:23:25.300 }, 00:23:25.300 { 00:23:25.300 "name": "BaseBdev2", 00:23:25.300 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:25.300 "is_configured": true, 00:23:25.300 "data_offset": 2048, 00:23:25.300 "data_size": 63488 00:23:25.300 } 00:23:25.300 ] 00:23:25.300 }' 00:23:25.300 12:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:25.300 12:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.558 12:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:25.558 12:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:25.558 12:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:23:25.558 12:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:25.558 12:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:25.558 12:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.558 12:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:25.558 12:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.558 12:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.558 12:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.558 12:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:25.558 "name": "raid_bdev1", 00:23:25.558 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:25.558 "strip_size_kb": 0, 00:23:25.558 "state": "online", 00:23:25.558 "raid_level": "raid1", 00:23:25.558 "superblock": true, 00:23:25.558 "num_base_bdevs": 2, 00:23:25.558 "num_base_bdevs_discovered": 1, 00:23:25.558 "num_base_bdevs_operational": 1, 00:23:25.558 "base_bdevs_list": [ 00:23:25.558 { 00:23:25.558 "name": null, 00:23:25.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.558 "is_configured": false, 00:23:25.558 "data_offset": 0, 00:23:25.558 "data_size": 63488 00:23:25.558 }, 00:23:25.558 { 00:23:25.558 "name": "BaseBdev2", 00:23:25.558 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:25.558 "is_configured": true, 00:23:25.558 "data_offset": 2048, 00:23:25.558 "data_size": 63488 00:23:25.558 } 00:23:25.558 ] 00:23:25.558 }' 00:23:25.558 12:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:25.815 12:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:25.815 12:55:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:25.815 12:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:25.815 12:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:25.815 12:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:23:25.815 12:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:25.815 12:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:25.815 12:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:25.815 12:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:25.815 12:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:25.815 12:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:25.815 12:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.816 12:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.816 [2024-12-05 12:55:08.186538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:25.816 [2024-12-05 12:55:08.186654] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:25.816 [2024-12-05 12:55:08.186666] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:25.816 request: 00:23:25.816 { 00:23:25.816 "base_bdev": "BaseBdev1", 00:23:25.816 "raid_bdev": "raid_bdev1", 00:23:25.816 "method": 
"bdev_raid_add_base_bdev", 00:23:25.816 "req_id": 1 00:23:25.816 } 00:23:25.816 Got JSON-RPC error response 00:23:25.816 response: 00:23:25.816 { 00:23:25.816 "code": -22, 00:23:25.816 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:25.816 } 00:23:25.816 12:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:25.816 12:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:23:25.816 12:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:25.816 12:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:25.816 12:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:25.816 12:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:26.795 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:26.795 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:26.795 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:26.795 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:26.795 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:26.795 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:26.795 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:26.795 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:26.795 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:26.795 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:26.795 12:55:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.795 12:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.795 12:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.795 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.795 12:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.795 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:26.795 "name": "raid_bdev1", 00:23:26.795 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:26.795 "strip_size_kb": 0, 00:23:26.795 "state": "online", 00:23:26.795 "raid_level": "raid1", 00:23:26.795 "superblock": true, 00:23:26.795 "num_base_bdevs": 2, 00:23:26.795 "num_base_bdevs_discovered": 1, 00:23:26.795 "num_base_bdevs_operational": 1, 00:23:26.795 "base_bdevs_list": [ 00:23:26.795 { 00:23:26.795 "name": null, 00:23:26.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.795 "is_configured": false, 00:23:26.795 "data_offset": 0, 00:23:26.795 "data_size": 63488 00:23:26.795 }, 00:23:26.795 { 00:23:26.795 "name": "BaseBdev2", 00:23:26.795 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:26.795 "is_configured": true, 00:23:26.795 "data_offset": 2048, 00:23:26.795 "data_size": 63488 00:23:26.795 } 00:23:26.795 ] 00:23:26.795 }' 00:23:26.795 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:26.795 12:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.053 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:27.053 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:27.053 12:55:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:27.053 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:27.053 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:27.053 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.053 12:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.053 12:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.053 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.053 12:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.053 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:27.053 "name": "raid_bdev1", 00:23:27.053 "uuid": "3a08b0bb-609e-4296-8e4c-ffe0bfb97110", 00:23:27.053 "strip_size_kb": 0, 00:23:27.053 "state": "online", 00:23:27.053 "raid_level": "raid1", 00:23:27.053 "superblock": true, 00:23:27.053 "num_base_bdevs": 2, 00:23:27.053 "num_base_bdevs_discovered": 1, 00:23:27.053 "num_base_bdevs_operational": 1, 00:23:27.053 "base_bdevs_list": [ 00:23:27.053 { 00:23:27.053 "name": null, 00:23:27.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.053 "is_configured": false, 00:23:27.053 "data_offset": 0, 00:23:27.053 "data_size": 63488 00:23:27.053 }, 00:23:27.053 { 00:23:27.053 "name": "BaseBdev2", 00:23:27.053 "uuid": "d2ee2dc9-d025-5c98-bacd-f3a4eb7eb5c6", 00:23:27.053 "is_configured": true, 00:23:27.053 "data_offset": 2048, 00:23:27.053 "data_size": 63488 00:23:27.053 } 00:23:27.053 ] 00:23:27.053 }' 00:23:27.053 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:27.053 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:23:27.053 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:27.053 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:27.053 12:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 73483 00:23:27.053 12:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73483 ']' 00:23:27.053 12:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 73483 00:23:27.053 12:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:23:27.053 12:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.053 12:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73483 00:23:27.310 killing process with pid 73483 00:23:27.310 Received shutdown signal, test time was about 60.000000 seconds 00:23:27.310 00:23:27.310 Latency(us) 00:23:27.310 [2024-12-05T12:55:09.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.310 [2024-12-05T12:55:09.897Z] =================================================================================================================== 00:23:27.310 [2024-12-05T12:55:09.897Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:27.310 12:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:27.310 12:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:27.310 12:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73483' 00:23:27.310 12:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 73483 00:23:27.310 [2024-12-05 12:55:09.640318] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:27.310 12:55:09 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 73483 00:23:27.310 [2024-12-05 12:55:09.640408] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:27.310 [2024-12-05 12:55:09.640448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:27.310 [2024-12-05 12:55:09.640457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:27.311 [2024-12-05 12:55:09.786726] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:27.875 ************************************ 00:23:27.875 END TEST raid_rebuild_test_sb 00:23:27.875 ************************************ 00:23:27.875 12:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:23:27.875 00:23:27.875 real 0m20.505s 00:23:27.875 user 0m24.100s 00:23:27.875 sys 0m2.996s 00:23:27.875 12:55:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:27.875 12:55:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.875 12:55:10 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:23:27.875 12:55:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:27.875 12:55:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:27.875 12:55:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:27.875 ************************************ 00:23:27.875 START TEST raid_rebuild_test_io 00:23:27.875 ************************************ 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:27.876 
12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=74194 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 74194 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 74194 ']' 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.876 12:55:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:28.134 [2024-12-05 12:55:10.468763] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:23:28.134 [2024-12-05 12:55:10.469014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:23:28.134 Zero copy mechanism will not be used. 
00:23:28.134 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74194 ] 00:23:28.134 [2024-12-05 12:55:10.624458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.134 [2024-12-05 12:55:10.705896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.392 [2024-12-05 12:55:10.817025] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:28.392 [2024-12-05 12:55:10.817189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:28.958 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.958 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:23:28.958 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:28.958 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:28.958 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.958 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:28.958 BaseBdev1_malloc 00:23:28.958 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.958 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:28.958 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.958 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:28.958 [2024-12-05 12:55:11.308133] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:28.958 [2024-12-05 12:55:11.308299] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:23:28.958 [2024-12-05 12:55:11.308323] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:28.958 [2024-12-05 12:55:11.308332] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.958 [2024-12-05 12:55:11.310508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.958 [2024-12-05 12:55:11.310545] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:28.958 BaseBdev1 00:23:28.958 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.958 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:28.958 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:28.958 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.958 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:28.958 BaseBdev2_malloc 00:23:28.958 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.958 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:28.958 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.958 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:28.958 [2024-12-05 12:55:11.339588] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:28.958 [2024-12-05 12:55:11.339634] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.958 [2024-12-05 12:55:11.339653] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:28.959 [2024-12-05 12:55:11.339662] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.959 [2024-12-05 12:55:11.341362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.959 [2024-12-05 12:55:11.341395] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:28.959 BaseBdev2 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:28.959 spare_malloc 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:28.959 spare_delay 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:28.959 [2024-12-05 12:55:11.393078] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:28.959 [2024-12-05 12:55:11.393124] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:23:28.959 [2024-12-05 12:55:11.393140] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:28.959 [2024-12-05 12:55:11.393150] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.959 [2024-12-05 12:55:11.394915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.959 [2024-12-05 12:55:11.395035] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:28.959 spare 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:28.959 [2024-12-05 12:55:11.401124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:28.959 [2024-12-05 12:55:11.402687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:28.959 [2024-12-05 12:55:11.402766] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:28.959 [2024-12-05 12:55:11.402779] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:28.959 [2024-12-05 12:55:11.402988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:28.959 [2024-12-05 12:55:11.403114] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:28.959 [2024-12-05 12:55:11.403123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:28.959 [2024-12-05 12:55:11.403241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:28.959 "name": "raid_bdev1", 00:23:28.959 "uuid": "03cbc33f-fdfa-4227-8a70-9903cf1295fa", 00:23:28.959 
"strip_size_kb": 0, 00:23:28.959 "state": "online", 00:23:28.959 "raid_level": "raid1", 00:23:28.959 "superblock": false, 00:23:28.959 "num_base_bdevs": 2, 00:23:28.959 "num_base_bdevs_discovered": 2, 00:23:28.959 "num_base_bdevs_operational": 2, 00:23:28.959 "base_bdevs_list": [ 00:23:28.959 { 00:23:28.959 "name": "BaseBdev1", 00:23:28.959 "uuid": "c7c8ad0b-ff08-5e00-848c-c10b975140a5", 00:23:28.959 "is_configured": true, 00:23:28.959 "data_offset": 0, 00:23:28.959 "data_size": 65536 00:23:28.959 }, 00:23:28.959 { 00:23:28.959 "name": "BaseBdev2", 00:23:28.959 "uuid": "dbfd0367-7a52-5c87-9339-b7f326f9fb48", 00:23:28.959 "is_configured": true, 00:23:28.959 "data_offset": 0, 00:23:28.959 "data_size": 65536 00:23:28.959 } 00:23:28.959 ] 00:23:28.959 }' 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:28.959 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:29.217 [2024-12-05 12:55:11.701420] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.217 12:55:11 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:29.217 [2024-12-05 12:55:11.761174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:29.217 12:55:11 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:29.217 "name": "raid_bdev1", 00:23:29.217 "uuid": "03cbc33f-fdfa-4227-8a70-9903cf1295fa", 00:23:29.217 "strip_size_kb": 0, 00:23:29.217 "state": "online", 00:23:29.217 "raid_level": "raid1", 00:23:29.217 "superblock": false, 00:23:29.217 "num_base_bdevs": 2, 00:23:29.217 "num_base_bdevs_discovered": 1, 00:23:29.217 "num_base_bdevs_operational": 1, 00:23:29.217 "base_bdevs_list": [ 00:23:29.217 { 00:23:29.217 "name": null, 00:23:29.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.217 "is_configured": false, 00:23:29.217 "data_offset": 0, 00:23:29.217 "data_size": 65536 00:23:29.217 }, 00:23:29.217 { 00:23:29.217 "name": "BaseBdev2", 00:23:29.217 "uuid": "dbfd0367-7a52-5c87-9339-b7f326f9fb48", 00:23:29.217 "is_configured": true, 00:23:29.217 "data_offset": 0, 00:23:29.217 "data_size": 65536 00:23:29.217 } 00:23:29.217 ] 00:23:29.217 }' 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:29.217 12:55:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
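The trace above repeatedly captures `bdev_raid_get_bdevs` output and checks it with `jq` via the bash `verify_raid_bdev_state` helper. As an illustration only (this is a re-implementation sketch, not SPDK code), the same check can be expressed in Python against the exact JSON shown in the trace after `BaseBdev1` was removed:

```python
import json

# JSON copied from the bdev_raid_get_bdevs output in the trace above
# (state after bdev_raid_remove_base_bdev BaseBdev1).
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "03cbc33f-fdfa-4227-8a70-9903cf1295fa",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": false,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1,
  "base_bdevs_list": [
    {"name": null, "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false, "data_offset": 0, "data_size": 65536},
    {"name": "BaseBdev2", "uuid": "dbfd0367-7a52-5c87-9339-b7f326f9fb48",
     "is_configured": true, "data_offset": 0, "data_size": 65536}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size,
                           num_operational):
    """Hypothetical mirror of the bash helper's jq checks."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # Operational count must agree with the configured base bdevs.
    configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert configured == num_operational
    return True

print(verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 1))  # True
```

This mirrors the `verify_raid_bdev_state raid_bdev1 online raid1 0 1` call in the trace: after removing one of the two raid1 base bdevs, the array stays online with a single configured member.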
00:23:29.474 [2024-12-05 12:55:11.849636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:29.474 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:29.474 Zero copy mechanism will not be used. 00:23:29.474 Running I/O for 60 seconds... 00:23:29.732 12:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:29.732 12:55:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.732 12:55:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:29.732 [2024-12-05 12:55:12.062191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:29.732 12:55:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.732 12:55:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:29.732 [2024-12-05 12:55:12.110584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:29.732 [2024-12-05 12:55:12.112182] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:29.732 [2024-12-05 12:55:12.228654] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:29.732 [2024-12-05 12:55:12.229047] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:29.989 [2024-12-05 12:55:12.436979] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:29.989 [2024-12-05 12:55:12.437196] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:30.246 [2024-12-05 12:55:12.767578] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 
offset_end: 12288 00:23:30.504 153.00 IOPS, 459.00 MiB/s [2024-12-05T12:55:13.091Z] [2024-12-05 12:55:12.879966] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:30.762 [2024-12-05 12:55:13.093322] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:30.762 [2024-12-05 12:55:13.093733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:30.762 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:30.762 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:30.762 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:30.762 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:30.762 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:30.762 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.762 12:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.762 12:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:30.762 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.762 12:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.762 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:30.762 "name": "raid_bdev1", 00:23:30.762 "uuid": "03cbc33f-fdfa-4227-8a70-9903cf1295fa", 00:23:30.762 "strip_size_kb": 0, 00:23:30.762 "state": "online", 00:23:30.762 "raid_level": "raid1", 00:23:30.762 "superblock": 
false, 00:23:30.762 "num_base_bdevs": 2, 00:23:30.762 "num_base_bdevs_discovered": 2, 00:23:30.762 "num_base_bdevs_operational": 2, 00:23:30.762 "process": { 00:23:30.762 "type": "rebuild", 00:23:30.762 "target": "spare", 00:23:30.762 "progress": { 00:23:30.762 "blocks": 14336, 00:23:30.762 "percent": 21 00:23:30.762 } 00:23:30.762 }, 00:23:30.762 "base_bdevs_list": [ 00:23:30.762 { 00:23:30.762 "name": "spare", 00:23:30.762 "uuid": "f1a29795-1b94-5387-8bcd-f37baa393cd3", 00:23:30.762 "is_configured": true, 00:23:30.762 "data_offset": 0, 00:23:30.762 "data_size": 65536 00:23:30.762 }, 00:23:30.762 { 00:23:30.762 "name": "BaseBdev2", 00:23:30.762 "uuid": "dbfd0367-7a52-5c87-9339-b7f326f9fb48", 00:23:30.762 "is_configured": true, 00:23:30.762 "data_offset": 0, 00:23:30.762 "data_size": 65536 00:23:30.762 } 00:23:30.762 ] 00:23:30.762 }' 00:23:30.762 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:30.762 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:30.762 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:30.762 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:30.762 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:30.762 12:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.762 12:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:30.762 [2024-12-05 12:55:13.206678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:30.762 [2024-12-05 12:55:13.307287] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:30.762 [2024-12-05 12:55:13.323732] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: 
Finished rebuild on raid bdev raid_bdev1: No such device 00:23:30.762 [2024-12-05 12:55:13.325555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:30.762 [2024-12-05 12:55:13.325657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:30.762 [2024-12-05 12:55:13.325670] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:31.019 [2024-12-05 12:55:13.346455] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:23:31.019 12:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.019 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:31.020 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:31.020 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:31.020 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:31.020 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:31.020 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:31.020 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:31.020 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:31.020 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:31.020 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:31.020 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.020 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:23:31.020 12:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.020 12:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:31.020 12:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.020 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:31.020 "name": "raid_bdev1", 00:23:31.020 "uuid": "03cbc33f-fdfa-4227-8a70-9903cf1295fa", 00:23:31.020 "strip_size_kb": 0, 00:23:31.020 "state": "online", 00:23:31.020 "raid_level": "raid1", 00:23:31.020 "superblock": false, 00:23:31.020 "num_base_bdevs": 2, 00:23:31.020 "num_base_bdevs_discovered": 1, 00:23:31.020 "num_base_bdevs_operational": 1, 00:23:31.020 "base_bdevs_list": [ 00:23:31.020 { 00:23:31.020 "name": null, 00:23:31.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.020 "is_configured": false, 00:23:31.020 "data_offset": 0, 00:23:31.020 "data_size": 65536 00:23:31.020 }, 00:23:31.020 { 00:23:31.020 "name": "BaseBdev2", 00:23:31.020 "uuid": "dbfd0367-7a52-5c87-9339-b7f326f9fb48", 00:23:31.020 "is_configured": true, 00:23:31.020 "data_offset": 0, 00:23:31.020 "data_size": 65536 00:23:31.020 } 00:23:31.020 ] 00:23:31.020 }' 00:23:31.020 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.020 12:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:31.278 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:31.278 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:31.278 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:31.278 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:31.278 12:55:13 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:31.278 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.278 12:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.278 12:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:31.278 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.278 12:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.278 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:31.278 "name": "raid_bdev1", 00:23:31.278 "uuid": "03cbc33f-fdfa-4227-8a70-9903cf1295fa", 00:23:31.278 "strip_size_kb": 0, 00:23:31.278 "state": "online", 00:23:31.278 "raid_level": "raid1", 00:23:31.278 "superblock": false, 00:23:31.278 "num_base_bdevs": 2, 00:23:31.278 "num_base_bdevs_discovered": 1, 00:23:31.278 "num_base_bdevs_operational": 1, 00:23:31.278 "base_bdevs_list": [ 00:23:31.278 { 00:23:31.278 "name": null, 00:23:31.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.278 "is_configured": false, 00:23:31.278 "data_offset": 0, 00:23:31.278 "data_size": 65536 00:23:31.278 }, 00:23:31.278 { 00:23:31.278 "name": "BaseBdev2", 00:23:31.278 "uuid": "dbfd0367-7a52-5c87-9339-b7f326f9fb48", 00:23:31.278 "is_configured": true, 00:23:31.278 "data_offset": 0, 00:23:31.278 "data_size": 65536 00:23:31.278 } 00:23:31.278 ] 00:23:31.278 }' 00:23:31.278 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:31.278 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:31.278 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:31.278 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:23:31.278 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:31.278 12:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.278 12:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:31.278 [2024-12-05 12:55:13.803465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:31.278 12:55:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.278 12:55:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:31.278 [2024-12-05 12:55:13.842320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:31.278 [2024-12-05 12:55:13.843928] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:31.542 173.50 IOPS, 520.50 MiB/s [2024-12-05T12:55:14.129Z] [2024-12-05 12:55:13.962020] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:31.542 [2024-12-05 12:55:13.962391] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:31.542 [2024-12-05 12:55:14.064835] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:31.542 [2024-12-05 12:55:14.065183] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:32.107 [2024-12-05 12:55:14.392326] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:32.107 [2024-12-05 12:55:14.508938] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:32.107 [2024-12-05 12:55:14.509265] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:32.364 [2024-12-05 12:55:14.831118] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:32.364 "name": "raid_bdev1", 00:23:32.364 "uuid": "03cbc33f-fdfa-4227-8a70-9903cf1295fa", 00:23:32.364 "strip_size_kb": 0, 00:23:32.364 "state": "online", 00:23:32.364 "raid_level": "raid1", 00:23:32.364 "superblock": false, 00:23:32.364 "num_base_bdevs": 2, 00:23:32.364 "num_base_bdevs_discovered": 2, 00:23:32.364 "num_base_bdevs_operational": 2, 00:23:32.364 "process": { 00:23:32.364 "type": "rebuild", 00:23:32.364 "target": "spare", 00:23:32.364 "progress": { 00:23:32.364 "blocks": 14336, 
00:23:32.364 "percent": 21 00:23:32.364 } 00:23:32.364 }, 00:23:32.364 "base_bdevs_list": [ 00:23:32.364 { 00:23:32.364 "name": "spare", 00:23:32.364 "uuid": "f1a29795-1b94-5387-8bcd-f37baa393cd3", 00:23:32.364 "is_configured": true, 00:23:32.364 "data_offset": 0, 00:23:32.364 "data_size": 65536 00:23:32.364 }, 00:23:32.364 { 00:23:32.364 "name": "BaseBdev2", 00:23:32.364 "uuid": "dbfd0367-7a52-5c87-9339-b7f326f9fb48", 00:23:32.364 "is_configured": true, 00:23:32.364 "data_offset": 0, 00:23:32.364 "data_size": 65536 00:23:32.364 } 00:23:32.364 ] 00:23:32.364 }' 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:32.364 147.00 IOPS, 441.00 MiB/s [2024-12-05T12:55:14.951Z] 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=307 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:32.364 12:55:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:32.364 "name": "raid_bdev1", 00:23:32.364 "uuid": "03cbc33f-fdfa-4227-8a70-9903cf1295fa", 00:23:32.364 "strip_size_kb": 0, 00:23:32.364 "state": "online", 00:23:32.364 "raid_level": "raid1", 00:23:32.364 "superblock": false, 00:23:32.364 "num_base_bdevs": 2, 00:23:32.364 "num_base_bdevs_discovered": 2, 00:23:32.364 "num_base_bdevs_operational": 2, 00:23:32.364 "process": { 00:23:32.364 "type": "rebuild", 00:23:32.364 "target": "spare", 00:23:32.364 "progress": { 00:23:32.364 "blocks": 14336, 00:23:32.364 "percent": 21 00:23:32.364 } 00:23:32.364 }, 00:23:32.364 "base_bdevs_list": [ 00:23:32.364 { 00:23:32.364 "name": "spare", 00:23:32.364 "uuid": "f1a29795-1b94-5387-8bcd-f37baa393cd3", 00:23:32.364 "is_configured": true, 00:23:32.364 "data_offset": 0, 00:23:32.364 "data_size": 65536 00:23:32.364 }, 00:23:32.364 { 00:23:32.364 "name": "BaseBdev2", 00:23:32.364 "uuid": "dbfd0367-7a52-5c87-9339-b7f326f9fb48", 00:23:32.364 "is_configured": true, 00:23:32.364 "data_offset": 0, 00:23:32.364 "data_size": 65536 00:23:32.364 } 00:23:32.364 ] 
00:23:32.364 }' 00:23:32.364 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:32.623 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:32.623 12:55:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:32.623 12:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:32.623 12:55:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:32.623 [2024-12-05 12:55:15.044641] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:32.879 [2024-12-05 12:55:15.296909] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:32.879 [2024-12-05 12:55:15.297220] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:32.879 [2024-12-05 12:55:15.415944] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:33.135 [2024-12-05 12:55:15.646579] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:23:33.392 [2024-12-05 12:55:15.747579] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:23:33.392 [2024-12-05 12:55:15.747807] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:23:33.650 128.50 IOPS, 385.50 MiB/s [2024-12-05T12:55:16.237Z] 12:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:33.650 12:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:23:33.650 12:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:33.650 12:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:33.650 12:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:33.650 12:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:33.650 12:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.650 12:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.650 12:55:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.650 12:55:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:33.650 12:55:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.650 12:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:33.650 "name": "raid_bdev1", 00:23:33.650 "uuid": "03cbc33f-fdfa-4227-8a70-9903cf1295fa", 00:23:33.650 "strip_size_kb": 0, 00:23:33.650 "state": "online", 00:23:33.650 "raid_level": "raid1", 00:23:33.650 "superblock": false, 00:23:33.650 "num_base_bdevs": 2, 00:23:33.650 "num_base_bdevs_discovered": 2, 00:23:33.650 "num_base_bdevs_operational": 2, 00:23:33.650 "process": { 00:23:33.650 "type": "rebuild", 00:23:33.650 "target": "spare", 00:23:33.650 "progress": { 00:23:33.650 "blocks": 30720, 00:23:33.650 "percent": 46 00:23:33.650 } 00:23:33.650 }, 00:23:33.650 "base_bdevs_list": [ 00:23:33.650 { 00:23:33.650 "name": "spare", 00:23:33.650 "uuid": "f1a29795-1b94-5387-8bcd-f37baa393cd3", 00:23:33.650 "is_configured": true, 00:23:33.650 "data_offset": 0, 00:23:33.650 "data_size": 65536 00:23:33.650 }, 00:23:33.650 { 00:23:33.650 "name": "BaseBdev2", 00:23:33.650 "uuid": 
"dbfd0367-7a52-5c87-9339-b7f326f9fb48", 00:23:33.650 "is_configured": true, 00:23:33.650 "data_offset": 0, 00:23:33.650 "data_size": 65536 00:23:33.650 } 00:23:33.650 ] 00:23:33.650 }' 00:23:33.650 12:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:33.650 12:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:33.650 12:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:33.650 [2024-12-05 12:55:16.091151] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:23:33.650 12:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:33.650 12:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:34.216 [2024-12-05 12:55:16.622435] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:23:34.732 110.00 IOPS, 330.00 MiB/s [2024-12-05T12:55:17.319Z] 12:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:34.733 12:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:34.733 12:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:34.733 12:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:34.733 12:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:34.733 12:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:34.733 12:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.733 12:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:23:34.733 12:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.733 12:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:34.733 12:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.733 12:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:34.733 "name": "raid_bdev1", 00:23:34.733 "uuid": "03cbc33f-fdfa-4227-8a70-9903cf1295fa", 00:23:34.733 "strip_size_kb": 0, 00:23:34.733 "state": "online", 00:23:34.733 "raid_level": "raid1", 00:23:34.733 "superblock": false, 00:23:34.733 "num_base_bdevs": 2, 00:23:34.733 "num_base_bdevs_discovered": 2, 00:23:34.733 "num_base_bdevs_operational": 2, 00:23:34.733 "process": { 00:23:34.733 "type": "rebuild", 00:23:34.733 "target": "spare", 00:23:34.733 "progress": { 00:23:34.733 "blocks": 49152, 00:23:34.733 "percent": 75 00:23:34.733 } 00:23:34.733 }, 00:23:34.733 "base_bdevs_list": [ 00:23:34.733 { 00:23:34.733 "name": "spare", 00:23:34.733 "uuid": "f1a29795-1b94-5387-8bcd-f37baa393cd3", 00:23:34.733 "is_configured": true, 00:23:34.733 "data_offset": 0, 00:23:34.733 "data_size": 65536 00:23:34.733 }, 00:23:34.733 { 00:23:34.733 "name": "BaseBdev2", 00:23:34.733 "uuid": "dbfd0367-7a52-5c87-9339-b7f326f9fb48", 00:23:34.733 "is_configured": true, 00:23:34.733 "data_offset": 0, 00:23:34.733 "data_size": 65536 00:23:34.733 } 00:23:34.733 ] 00:23:34.733 }' 00:23:34.733 12:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:34.733 12:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:34.733 12:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:34.733 12:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:34.733 12:55:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:34.990 [2024-12-05 12:55:17.501051] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:23:35.555 97.33 IOPS, 292.00 MiB/s [2024-12-05T12:55:18.142Z] [2024-12-05 12:55:18.033408] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:35.555 [2024-12-05 12:55:18.138321] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:35.813 [2024-12-05 12:55:18.139917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:23:35.813 "name": "raid_bdev1", 00:23:35.813 "uuid": "03cbc33f-fdfa-4227-8a70-9903cf1295fa", 00:23:35.813 "strip_size_kb": 0, 00:23:35.813 "state": "online", 00:23:35.813 "raid_level": "raid1", 00:23:35.813 "superblock": false, 00:23:35.813 "num_base_bdevs": 2, 00:23:35.813 "num_base_bdevs_discovered": 2, 00:23:35.813 "num_base_bdevs_operational": 2, 00:23:35.813 "base_bdevs_list": [ 00:23:35.813 { 00:23:35.813 "name": "spare", 00:23:35.813 "uuid": "f1a29795-1b94-5387-8bcd-f37baa393cd3", 00:23:35.813 "is_configured": true, 00:23:35.813 "data_offset": 0, 00:23:35.813 "data_size": 65536 00:23:35.813 }, 00:23:35.813 { 00:23:35.813 "name": "BaseBdev2", 00:23:35.813 "uuid": "dbfd0367-7a52-5c87-9339-b7f326f9fb48", 00:23:35.813 "is_configured": true, 00:23:35.813 "data_offset": 0, 00:23:35.813 "data_size": 65536 00:23:35.813 } 00:23:35.813 ] 00:23:35.813 }' 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:35.813 12:55:18 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:35.813 "name": "raid_bdev1", 00:23:35.813 "uuid": "03cbc33f-fdfa-4227-8a70-9903cf1295fa", 00:23:35.813 "strip_size_kb": 0, 00:23:35.813 "state": "online", 00:23:35.813 "raid_level": "raid1", 00:23:35.813 "superblock": false, 00:23:35.813 "num_base_bdevs": 2, 00:23:35.813 "num_base_bdevs_discovered": 2, 00:23:35.813 "num_base_bdevs_operational": 2, 00:23:35.813 "base_bdevs_list": [ 00:23:35.813 { 00:23:35.813 "name": "spare", 00:23:35.813 "uuid": "f1a29795-1b94-5387-8bcd-f37baa393cd3", 00:23:35.813 "is_configured": true, 00:23:35.813 "data_offset": 0, 00:23:35.813 "data_size": 65536 00:23:35.813 }, 00:23:35.813 { 00:23:35.813 "name": "BaseBdev2", 00:23:35.813 "uuid": "dbfd0367-7a52-5c87-9339-b7f326f9fb48", 00:23:35.813 "is_configured": true, 00:23:35.813 "data_offset": 0, 00:23:35.813 "data_size": 65536 00:23:35.813 } 00:23:35.813 ] 00:23:35.813 }' 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:35.813 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:36.071 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:36.071 12:55:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:36.071 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:36.071 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:36.071 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:36.071 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:36.071 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:36.071 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:36.071 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:36.071 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:36.071 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:36.071 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.071 12:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.071 12:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:36.071 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.071 12:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.071 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:36.071 "name": "raid_bdev1", 00:23:36.071 "uuid": "03cbc33f-fdfa-4227-8a70-9903cf1295fa", 00:23:36.071 "strip_size_kb": 0, 00:23:36.071 "state": "online", 00:23:36.071 "raid_level": "raid1", 00:23:36.071 "superblock": false, 00:23:36.071 "num_base_bdevs": 2, 00:23:36.071 
"num_base_bdevs_discovered": 2, 00:23:36.071 "num_base_bdevs_operational": 2, 00:23:36.071 "base_bdevs_list": [ 00:23:36.071 { 00:23:36.071 "name": "spare", 00:23:36.071 "uuid": "f1a29795-1b94-5387-8bcd-f37baa393cd3", 00:23:36.071 "is_configured": true, 00:23:36.071 "data_offset": 0, 00:23:36.071 "data_size": 65536 00:23:36.071 }, 00:23:36.071 { 00:23:36.071 "name": "BaseBdev2", 00:23:36.071 "uuid": "dbfd0367-7a52-5c87-9339-b7f326f9fb48", 00:23:36.071 "is_configured": true, 00:23:36.071 "data_offset": 0, 00:23:36.071 "data_size": 65536 00:23:36.071 } 00:23:36.071 ] 00:23:36.071 }' 00:23:36.071 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:36.071 12:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:36.329 [2024-12-05 12:55:18.733635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:36.329 [2024-12-05 12:55:18.733661] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:36.329 00:23:36.329 Latency(us) 00:23:36.329 [2024-12-05T12:55:18.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.329 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:36.329 raid_bdev1 : 6.93 88.50 265.51 0.00 0.00 14669.77 258.36 107277.39 00:23:36.329 [2024-12-05T12:55:18.916Z] =================================================================================================================== 00:23:36.329 [2024-12-05T12:55:18.916Z] Total : 88.50 265.51 0.00 0.00 14669.77 258.36 107277.39 00:23:36.329 [2024-12-05 12:55:18.789831] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:36.329 { 00:23:36.329 "results": [ 00:23:36.329 { 00:23:36.329 "job": "raid_bdev1", 00:23:36.329 "core_mask": "0x1", 00:23:36.329 "workload": "randrw", 00:23:36.329 "percentage": 50, 00:23:36.329 "status": "finished", 00:23:36.329 "queue_depth": 2, 00:23:36.329 "io_size": 3145728, 00:23:36.329 "runtime": 6.926247, 00:23:36.329 "iops": 88.50391850016322, 00:23:36.329 "mibps": 265.5117555004897, 00:23:36.329 "io_failed": 0, 00:23:36.329 "io_timeout": 0, 00:23:36.329 "avg_latency_us": 14669.768231898606, 00:23:36.329 "min_latency_us": 258.3630769230769, 00:23:36.329 "max_latency_us": 107277.39076923077 00:23:36.329 } 00:23:36.329 ], 00:23:36.329 "core_count": 1 00:23:36.329 } 00:23:36.329 [2024-12-05 12:55:18.789999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:36.329 [2024-12-05 12:55:18.790071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:36.329 [2024-12-05 12:55:18.790082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 
00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:36.329 12:55:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:23:36.586 /dev/nbd0 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:36.586 1+0 records in 00:23:36.586 1+0 records out 00:23:36.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385986 s, 10.6 MB/s 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 
00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:36.586 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:23:36.845 /dev/nbd1 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:23:36.845 1+0 records in 00:23:36.845 1+0 records out 00:23:36.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028079 s, 14.6 MB/s 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:36.845 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:37.102 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
00:23:37.102 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:37.102 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:37.102 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:37.102 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:37.102 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:37.102 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:23:37.102 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:37.102 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:37.102 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:37.102 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:37.102 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:37.102 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:23:37.102 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:37.102 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:37.359 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:37.359 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:37.359 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:37.359 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:37.359 12:55:19 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:37.359 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:37.359 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:23:37.359 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:37.359 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:23:37.359 12:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 74194 00:23:37.359 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 74194 ']' 00:23:37.359 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 74194 00:23:37.359 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:23:37.359 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.359 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74194 00:23:37.359 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:37.359 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:37.359 killing process with pid 74194 00:23:37.359 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74194' 00:23:37.359 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 74194 00:23:37.359 Received shutdown signal, test time was about 8.016602 seconds 00:23:37.359 00:23:37.359 Latency(us) 00:23:37.359 [2024-12-05T12:55:19.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.359 [2024-12-05T12:55:19.946Z] =================================================================================================================== 00:23:37.359 
[2024-12-05T12:55:19.946Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:37.359 [2024-12-05 12:55:19.867945] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:37.359 12:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 74194 00:23:37.617 [2024-12-05 12:55:19.979880] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:38.182 12:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:23:38.182 00:23:38.182 real 0m10.182s 00:23:38.182 user 0m12.684s 00:23:38.182 sys 0m0.925s 00:23:38.182 12:55:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:38.182 12:55:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:38.182 ************************************ 00:23:38.182 END TEST raid_rebuild_test_io 00:23:38.182 ************************************ 00:23:38.182 12:55:20 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:23:38.182 12:55:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:38.182 12:55:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:38.183 12:55:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:38.183 ************************************ 00:23:38.183 START TEST raid_rebuild_test_sb_io 00:23:38.183 ************************************ 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # 
local background_io=true 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=74550 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 74550 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 74550 ']' 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:38.183 12:55:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:38.183 [2024-12-05 12:55:20.684763] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:23:38.183 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:38.183 Zero copy mechanism will not be used. 
00:23:38.183 [2024-12-05 12:55:20.684880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74550 ] 00:23:38.441 [2024-12-05 12:55:20.841620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.441 [2024-12-05 12:55:20.922392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.698 [2024-12-05 12:55:21.031077] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:38.698 [2024-12-05 12:55:21.031120] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:38.954 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.954 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:23:38.954 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:38.954 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:38.954 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.954 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.212 BaseBdev1_malloc 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.212 [2024-12-05 12:55:21.564322] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:39.212 [2024-12-05 12:55:21.564373] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.212 [2024-12-05 12:55:21.564390] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:39.212 [2024-12-05 12:55:21.564399] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.212 [2024-12-05 12:55:21.566143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.212 [2024-12-05 12:55:21.566178] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:39.212 BaseBdev1 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.212 BaseBdev2_malloc 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.212 [2024-12-05 12:55:21.595973] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:39.212 [2024-12-05 12:55:21.596019] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:23:39.212 [2024-12-05 12:55:21.596038] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:39.212 [2024-12-05 12:55:21.596046] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.212 [2024-12-05 12:55:21.597782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.212 [2024-12-05 12:55:21.597812] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:39.212 BaseBdev2 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.212 spare_malloc 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.212 spare_delay 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.212 
[2024-12-05 12:55:21.648938] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:39.212 [2024-12-05 12:55:21.648984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.212 [2024-12-05 12:55:21.648999] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:39.212 [2024-12-05 12:55:21.649008] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.212 [2024-12-05 12:55:21.650737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.212 [2024-12-05 12:55:21.650767] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:39.212 spare 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.212 [2024-12-05 12:55:21.656983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:39.212 [2024-12-05 12:55:21.658461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:39.212 [2024-12-05 12:55:21.658610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:39.212 [2024-12-05 12:55:21.658627] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:39.212 [2024-12-05 12:55:21.658822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:39.212 [2024-12-05 12:55:21.658947] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:39.212 [2024-12-05 
12:55:21.658960] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:39.212 [2024-12-05 12:55:21.659070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:39.212 "name": "raid_bdev1", 00:23:39.212 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:39.212 "strip_size_kb": 0, 00:23:39.212 "state": "online", 00:23:39.212 "raid_level": "raid1", 00:23:39.212 "superblock": true, 00:23:39.212 "num_base_bdevs": 2, 00:23:39.212 "num_base_bdevs_discovered": 2, 00:23:39.212 "num_base_bdevs_operational": 2, 00:23:39.212 "base_bdevs_list": [ 00:23:39.212 { 00:23:39.212 "name": "BaseBdev1", 00:23:39.212 "uuid": "0c5e1fe7-2732-580c-8c3c-f3bcd90dae67", 00:23:39.212 "is_configured": true, 00:23:39.212 "data_offset": 2048, 00:23:39.212 "data_size": 63488 00:23:39.212 }, 00:23:39.212 { 00:23:39.212 "name": "BaseBdev2", 00:23:39.212 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:39.212 "is_configured": true, 00:23:39.212 "data_offset": 2048, 00:23:39.212 "data_size": 63488 00:23:39.212 } 00:23:39.212 ] 00:23:39.212 }' 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:39.212 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.469 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:39.469 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.469 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.469 12:55:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:39.469 [2024-12-05 12:55:21.993276] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:39.469 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.469 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:23:39.469 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.469 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.469 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:39.469 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.469 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.469 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:23:39.469 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.791 [2024-12-05 12:55:22.057014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:39.791 "name": "raid_bdev1", 00:23:39.791 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:39.791 "strip_size_kb": 0, 00:23:39.791 "state": "online", 00:23:39.791 "raid_level": "raid1", 00:23:39.791 "superblock": true, 00:23:39.791 "num_base_bdevs": 2, 00:23:39.791 "num_base_bdevs_discovered": 1, 00:23:39.791 "num_base_bdevs_operational": 1, 00:23:39.791 "base_bdevs_list": [ 00:23:39.791 { 00:23:39.791 "name": null, 00:23:39.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.791 "is_configured": false, 00:23:39.791 "data_offset": 0, 00:23:39.791 "data_size": 63488 00:23:39.791 }, 00:23:39.791 { 00:23:39.791 "name": "BaseBdev2", 00:23:39.791 "uuid": 
"69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:39.791 "is_configured": true, 00:23:39.791 "data_offset": 2048, 00:23:39.791 "data_size": 63488 00:23:39.791 } 00:23:39.791 ] 00:23:39.791 }' 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.791 [2024-12-05 12:55:22.137371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:39.791 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:39.791 Zero copy mechanism will not be used. 00:23:39.791 Running I/O for 60 seconds... 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.791 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.791 [2024-12-05 12:55:22.371903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:40.048 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.048 12:55:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:40.048 [2024-12-05 12:55:22.415096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:40.048 [2024-12-05 12:55:22.416647] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:40.048 [2024-12-05 12:55:22.533175] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:40.048 [2024-12-05 12:55:22.533520] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:40.306 [2024-12-05 12:55:22.746245] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:40.306 [2024-12-05 12:55:22.746468] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:40.563 [2024-12-05 12:55:23.083834] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:40.820 171.00 IOPS, 513.00 MiB/s [2024-12-05T12:55:23.407Z] [2024-12-05 12:55:23.205751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:41.077 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:41.077 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:41.077 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:41.077 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:41.077 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:41.077 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.077 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.077 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.077 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:41.077 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.078 [2024-12-05 12:55:23.425321] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:41.078 12:55:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:41.078 "name": "raid_bdev1", 00:23:41.078 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:41.078 "strip_size_kb": 0, 00:23:41.078 "state": "online", 00:23:41.078 "raid_level": "raid1", 00:23:41.078 "superblock": true, 00:23:41.078 "num_base_bdevs": 2, 00:23:41.078 "num_base_bdevs_discovered": 2, 00:23:41.078 "num_base_bdevs_operational": 2, 00:23:41.078 "process": { 00:23:41.078 "type": "rebuild", 00:23:41.078 "target": "spare", 00:23:41.078 "progress": { 00:23:41.078 "blocks": 12288, 00:23:41.078 "percent": 19 00:23:41.078 } 00:23:41.078 }, 00:23:41.078 "base_bdevs_list": [ 00:23:41.078 { 00:23:41.078 "name": "spare", 00:23:41.078 "uuid": "f958daaf-c697-58eb-a429-0eab347276c8", 00:23:41.078 "is_configured": true, 00:23:41.078 "data_offset": 2048, 00:23:41.078 "data_size": 63488 00:23:41.078 }, 00:23:41.078 { 00:23:41.078 "name": "BaseBdev2", 00:23:41.078 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:41.078 "is_configured": true, 00:23:41.078 "data_offset": 2048, 00:23:41.078 "data_size": 63488 00:23:41.078 } 00:23:41.078 ] 00:23:41.078 }' 00:23:41.078 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:41.078 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:41.078 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:41.078 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:41.078 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:41.078 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.078 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:41.078 [2024-12-05 
12:55:23.515216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:41.078 [2024-12-05 12:55:23.537604] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:41.078 [2024-12-05 12:55:23.653738] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:41.078 [2024-12-05 12:55:23.660572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:41.078 [2024-12-05 12:55:23.660604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:41.078 [2024-12-05 12:55:23.660618] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:41.335 [2024-12-05 12:55:23.691713] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:23:41.335 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.335 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:41.335 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:41.335 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:41.335 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:41.335 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:41.335 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:41.335 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:41.335 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:41.335 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:41.335 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:41.335 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.335 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.335 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.335 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:41.335 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.335 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:41.335 "name": "raid_bdev1", 00:23:41.335 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:41.335 "strip_size_kb": 0, 00:23:41.335 "state": "online", 00:23:41.335 "raid_level": "raid1", 00:23:41.335 "superblock": true, 00:23:41.335 "num_base_bdevs": 2, 00:23:41.335 "num_base_bdevs_discovered": 1, 00:23:41.335 "num_base_bdevs_operational": 1, 00:23:41.335 "base_bdevs_list": [ 00:23:41.335 { 00:23:41.335 "name": null, 00:23:41.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.335 "is_configured": false, 00:23:41.335 "data_offset": 0, 00:23:41.335 "data_size": 63488 00:23:41.335 }, 00:23:41.335 { 00:23:41.335 "name": "BaseBdev2", 00:23:41.335 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:41.335 "is_configured": true, 00:23:41.335 "data_offset": 2048, 00:23:41.335 "data_size": 63488 00:23:41.335 } 00:23:41.335 ] 00:23:41.335 }' 00:23:41.335 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:41.335 12:55:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:41.593 12:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:23:41.593 12:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:41.593 12:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:41.593 12:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:41.593 12:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:41.593 12:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.593 12:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.593 12:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.593 12:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:41.593 12:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.593 12:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:41.593 "name": "raid_bdev1", 00:23:41.593 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:41.593 "strip_size_kb": 0, 00:23:41.593 "state": "online", 00:23:41.593 "raid_level": "raid1", 00:23:41.593 "superblock": true, 00:23:41.593 "num_base_bdevs": 2, 00:23:41.593 "num_base_bdevs_discovered": 1, 00:23:41.593 "num_base_bdevs_operational": 1, 00:23:41.593 "base_bdevs_list": [ 00:23:41.593 { 00:23:41.593 "name": null, 00:23:41.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.593 "is_configured": false, 00:23:41.593 "data_offset": 0, 00:23:41.593 "data_size": 63488 00:23:41.593 }, 00:23:41.593 { 00:23:41.593 "name": "BaseBdev2", 00:23:41.593 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:41.593 "is_configured": true, 00:23:41.593 "data_offset": 2048, 00:23:41.593 "data_size": 63488 00:23:41.593 } 
00:23:41.593 ] 00:23:41.593 }' 00:23:41.593 12:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:41.593 12:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:41.593 12:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:41.593 12:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:41.593 12:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:41.593 12:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.593 12:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:41.593 [2024-12-05 12:55:24.129884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:41.593 190.50 IOPS, 571.50 MiB/s [2024-12-05T12:55:24.180Z] 12:55:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.593 12:55:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:41.593 [2024-12-05 12:55:24.174241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:41.593 [2024-12-05 12:55:24.175788] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:41.850 [2024-12-05 12:55:24.292264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:41.851 [2024-12-05 12:55:24.292585] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:41.851 [2024-12-05 12:55:24.405407] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:41.851 [2024-12-05 12:55:24.405624] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:42.415 [2024-12-05 12:55:24.731853] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:42.415 [2024-12-05 12:55:24.732234] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:42.415 [2024-12-05 12:55:24.954544] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:42.674 161.67 IOPS, 485.00 MiB/s [2024-12-05T12:55:25.261Z] 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:42.674 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:42.674 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:42.674 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:42.674 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:42.674 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.674 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.674 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.674 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:42.674 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.674 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:42.674 "name": "raid_bdev1", 00:23:42.674 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:42.674 
"strip_size_kb": 0, 00:23:42.674 "state": "online", 00:23:42.674 "raid_level": "raid1", 00:23:42.674 "superblock": true, 00:23:42.674 "num_base_bdevs": 2, 00:23:42.674 "num_base_bdevs_discovered": 2, 00:23:42.674 "num_base_bdevs_operational": 2, 00:23:42.674 "process": { 00:23:42.674 "type": "rebuild", 00:23:42.674 "target": "spare", 00:23:42.674 "progress": { 00:23:42.674 "blocks": 12288, 00:23:42.674 "percent": 19 00:23:42.674 } 00:23:42.674 }, 00:23:42.674 "base_bdevs_list": [ 00:23:42.674 { 00:23:42.674 "name": "spare", 00:23:42.674 "uuid": "f958daaf-c697-58eb-a429-0eab347276c8", 00:23:42.674 "is_configured": true, 00:23:42.674 "data_offset": 2048, 00:23:42.674 "data_size": 63488 00:23:42.674 }, 00:23:42.674 { 00:23:42.674 "name": "BaseBdev2", 00:23:42.674 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:42.674 "is_configured": true, 00:23:42.675 "data_offset": 2048, 00:23:42.675 "data_size": 63488 00:23:42.675 } 00:23:42.675 ] 00:23:42.675 }' 00:23:42.675 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:42.675 [2024-12-05 12:55:25.213929] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:42.675 [2024-12-05 12:55:25.214367] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:42.675 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:42.675 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:23:42.933 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=318 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:42.933 "name": "raid_bdev1", 00:23:42.933 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:42.933 
"strip_size_kb": 0, 00:23:42.933 "state": "online", 00:23:42.933 "raid_level": "raid1", 00:23:42.933 "superblock": true, 00:23:42.933 "num_base_bdevs": 2, 00:23:42.933 "num_base_bdevs_discovered": 2, 00:23:42.933 "num_base_bdevs_operational": 2, 00:23:42.933 "process": { 00:23:42.933 "type": "rebuild", 00:23:42.933 "target": "spare", 00:23:42.933 "progress": { 00:23:42.933 "blocks": 14336, 00:23:42.933 "percent": 22 00:23:42.933 } 00:23:42.933 }, 00:23:42.933 "base_bdevs_list": [ 00:23:42.933 { 00:23:42.933 "name": "spare", 00:23:42.933 "uuid": "f958daaf-c697-58eb-a429-0eab347276c8", 00:23:42.933 "is_configured": true, 00:23:42.933 "data_offset": 2048, 00:23:42.933 "data_size": 63488 00:23:42.933 }, 00:23:42.933 { 00:23:42.933 "name": "BaseBdev2", 00:23:42.933 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:42.933 "is_configured": true, 00:23:42.933 "data_offset": 2048, 00:23:42.933 "data_size": 63488 00:23:42.933 } 00:23:42.933 ] 00:23:42.933 }' 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:42.933 12:55:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:42.933 [2024-12-05 12:55:25.433373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:42.933 [2024-12-05 12:55:25.433767] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:43.190 [2024-12-05 12:55:25.670685] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 
18432 offset_end: 24576 00:23:43.447 [2024-12-05 12:55:25.883967] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:43.447 [2024-12-05 12:55:25.884184] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:43.704 134.25 IOPS, 402.75 MiB/s [2024-12-05T12:55:26.291Z] [2024-12-05 12:55:26.210610] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:23:43.962 12:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:43.962 12:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:43.962 12:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:43.962 12:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:43.962 12:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:43.962 12:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:43.962 12:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.962 12:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.962 12:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:43.962 12:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.962 12:55:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.962 12:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:43.962 "name": "raid_bdev1", 00:23:43.962 "uuid": 
"cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:43.962 "strip_size_kb": 0, 00:23:43.962 "state": "online", 00:23:43.962 "raid_level": "raid1", 00:23:43.962 "superblock": true, 00:23:43.962 "num_base_bdevs": 2, 00:23:43.962 "num_base_bdevs_discovered": 2, 00:23:43.962 "num_base_bdevs_operational": 2, 00:23:43.962 "process": { 00:23:43.962 "type": "rebuild", 00:23:43.962 "target": "spare", 00:23:43.962 "progress": { 00:23:43.962 "blocks": 26624, 00:23:43.962 "percent": 41 00:23:43.962 } 00:23:43.962 }, 00:23:43.962 "base_bdevs_list": [ 00:23:43.962 { 00:23:43.962 "name": "spare", 00:23:43.962 "uuid": "f958daaf-c697-58eb-a429-0eab347276c8", 00:23:43.962 "is_configured": true, 00:23:43.962 "data_offset": 2048, 00:23:43.962 "data_size": 63488 00:23:43.962 }, 00:23:43.962 { 00:23:43.962 "name": "BaseBdev2", 00:23:43.962 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:43.962 "is_configured": true, 00:23:43.962 "data_offset": 2048, 00:23:43.962 "data_size": 63488 00:23:43.962 } 00:23:43.962 ] 00:23:43.962 }' 00:23:43.962 12:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:43.962 [2024-12-05 12:55:26.422933] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:23:43.962 12:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:43.962 12:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:43.962 12:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:43.962 12:55:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:44.219 [2024-12-05 12:55:26.647303] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:23:44.219 [2024-12-05 12:55:26.775689] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:23:45.040 120.40 IOPS, 361.20 MiB/s [2024-12-05T12:55:27.627Z] [2024-12-05 12:55:27.428221] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:23:45.040 12:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:45.040 12:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:45.040 12:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:45.040 12:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:45.040 12:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:45.040 12:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:45.040 12:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.040 12:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.040 12:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:45.040 12:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.040 12:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.040 12:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:45.040 "name": "raid_bdev1", 00:23:45.040 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:45.040 "strip_size_kb": 0, 00:23:45.040 "state": "online", 00:23:45.040 "raid_level": "raid1", 00:23:45.040 "superblock": true, 00:23:45.040 "num_base_bdevs": 2, 00:23:45.040 "num_base_bdevs_discovered": 2, 00:23:45.040 
"num_base_bdevs_operational": 2, 00:23:45.040 "process": { 00:23:45.040 "type": "rebuild", 00:23:45.040 "target": "spare", 00:23:45.040 "progress": { 00:23:45.040 "blocks": 45056, 00:23:45.040 "percent": 70 00:23:45.040 } 00:23:45.040 }, 00:23:45.040 "base_bdevs_list": [ 00:23:45.040 { 00:23:45.040 "name": "spare", 00:23:45.040 "uuid": "f958daaf-c697-58eb-a429-0eab347276c8", 00:23:45.040 "is_configured": true, 00:23:45.040 "data_offset": 2048, 00:23:45.040 "data_size": 63488 00:23:45.040 }, 00:23:45.040 { 00:23:45.040 "name": "BaseBdev2", 00:23:45.040 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:45.040 "is_configured": true, 00:23:45.040 "data_offset": 2048, 00:23:45.040 "data_size": 63488 00:23:45.040 } 00:23:45.040 ] 00:23:45.040 }' 00:23:45.040 12:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:45.040 12:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:45.040 12:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:45.040 12:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:45.040 12:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:45.298 [2024-12-05 12:55:27.767512] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:23:45.863 108.17 IOPS, 324.50 MiB/s [2024-12-05T12:55:28.450Z] [2024-12-05 12:55:28.413378] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:46.122 [2024-12-05 12:55:28.518317] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:46.122 [2024-12-05 12:55:28.519860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- 
# (( SECONDS < timeout )) 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:46.122 "name": "raid_bdev1", 00:23:46.122 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:46.122 "strip_size_kb": 0, 00:23:46.122 "state": "online", 00:23:46.122 "raid_level": "raid1", 00:23:46.122 "superblock": true, 00:23:46.122 "num_base_bdevs": 2, 00:23:46.122 "num_base_bdevs_discovered": 2, 00:23:46.122 "num_base_bdevs_operational": 2, 00:23:46.122 "base_bdevs_list": [ 00:23:46.122 { 00:23:46.122 "name": "spare", 00:23:46.122 "uuid": "f958daaf-c697-58eb-a429-0eab347276c8", 00:23:46.122 "is_configured": true, 00:23:46.122 "data_offset": 2048, 00:23:46.122 "data_size": 63488 00:23:46.122 }, 00:23:46.122 { 00:23:46.122 "name": "BaseBdev2", 00:23:46.122 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 
00:23:46.122 "is_configured": true, 00:23:46.122 "data_offset": 2048, 00:23:46.122 "data_size": 63488 00:23:46.122 } 00:23:46.122 ] 00:23:46.122 }' 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:46.122 "name": "raid_bdev1", 00:23:46.122 "uuid": 
"cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:46.122 "strip_size_kb": 0, 00:23:46.122 "state": "online", 00:23:46.122 "raid_level": "raid1", 00:23:46.122 "superblock": true, 00:23:46.122 "num_base_bdevs": 2, 00:23:46.122 "num_base_bdevs_discovered": 2, 00:23:46.122 "num_base_bdevs_operational": 2, 00:23:46.122 "base_bdevs_list": [ 00:23:46.122 { 00:23:46.122 "name": "spare", 00:23:46.122 "uuid": "f958daaf-c697-58eb-a429-0eab347276c8", 00:23:46.122 "is_configured": true, 00:23:46.122 "data_offset": 2048, 00:23:46.122 "data_size": 63488 00:23:46.122 }, 00:23:46.122 { 00:23:46.122 "name": "BaseBdev2", 00:23:46.122 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:46.122 "is_configured": true, 00:23:46.122 "data_offset": 2048, 00:23:46.122 "data_size": 63488 00:23:46.122 } 00:23:46.122 ] 00:23:46.122 }' 00:23:46.122 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:46.380 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:46.380 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:46.380 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:46.380 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:46.380 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:46.380 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:46.380 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:46.380 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:46.380 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:46.380 12:55:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:46.380 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:46.380 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:46.380 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:46.380 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.380 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.380 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:46.380 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.380 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.380 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:46.380 "name": "raid_bdev1", 00:23:46.380 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:46.380 "strip_size_kb": 0, 00:23:46.380 "state": "online", 00:23:46.380 "raid_level": "raid1", 00:23:46.380 "superblock": true, 00:23:46.380 "num_base_bdevs": 2, 00:23:46.380 "num_base_bdevs_discovered": 2, 00:23:46.380 "num_base_bdevs_operational": 2, 00:23:46.380 "base_bdevs_list": [ 00:23:46.380 { 00:23:46.380 "name": "spare", 00:23:46.380 "uuid": "f958daaf-c697-58eb-a429-0eab347276c8", 00:23:46.380 "is_configured": true, 00:23:46.380 "data_offset": 2048, 00:23:46.380 "data_size": 63488 00:23:46.380 }, 00:23:46.380 { 00:23:46.380 "name": "BaseBdev2", 00:23:46.380 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:46.380 "is_configured": true, 00:23:46.380 "data_offset": 2048, 00:23:46.380 "data_size": 63488 00:23:46.380 } 00:23:46.380 ] 00:23:46.380 }' 00:23:46.380 12:55:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:46.380 12:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:46.640 [2024-12-05 12:55:29.068924] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:46.640 [2024-12-05 12:55:29.069041] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:46.640 98.43 IOPS, 295.29 MiB/s 00:23:46.640 Latency(us) 00:23:46.640 [2024-12-05T12:55:29.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.640 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:46.640 raid_bdev1 : 7.02 98.41 295.22 0.00 0.00 13537.74 255.21 112116.97 00:23:46.640 [2024-12-05T12:55:29.227Z] =================================================================================================================== 00:23:46.640 [2024-12-05T12:55:29.227Z] Total : 98.41 295.22 0.00 0.00 13537.74 255.21 112116.97 00:23:46.640 { 00:23:46.640 "results": [ 00:23:46.640 { 00:23:46.640 "job": "raid_bdev1", 00:23:46.640 "core_mask": "0x1", 00:23:46.640 "workload": "randrw", 00:23:46.640 "percentage": 50, 00:23:46.640 "status": "finished", 00:23:46.640 "queue_depth": 2, 00:23:46.640 "io_size": 3145728, 00:23:46.640 "runtime": 7.02186, 00:23:46.640 "iops": 98.40697479015532, 00:23:46.640 "mibps": 295.2209243704659, 00:23:46.640 "io_failed": 0, 00:23:46.640 "io_timeout": 0, 00:23:46.640 "avg_latency_us": 13537.738252254258, 00:23:46.640 "min_latency_us": 255.2123076923077, 00:23:46.640 "max_latency_us": 112116.97230769231 
00:23:46.640 } 00:23:46.640 ], 00:23:46.640 "core_count": 1 00:23:46.640 } 00:23:46.640 [2024-12-05 12:55:29.172601] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:46.640 [2024-12-05 12:55:29.172647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:46.640 [2024-12-05 12:55:29.172713] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:46.640 [2024-12-05 12:55:29.172721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 
-- # local bdev_list 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:46.640 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:23:46.898 /dev/nbd0 00:23:46.898 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:46.898 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:46.898 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:46.898 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:23:46.898 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:46.898 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:46.898 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:46.898 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:23:46.898 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:46.898 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:46.898 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:23:46.898 1+0 records in 00:23:46.898 1+0 records out 00:23:46.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561242 s, 7.3 MB/s 00:23:46.898 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:46.898 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:23:46.898 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:46.898 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:46.898 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:23:46.899 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:46.899 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:46.899 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:23:46.899 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:23:46.899 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:23:46.899 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:46.899 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:23:46.899 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:46.899 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:46.899 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:46.899 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:23:46.899 12:55:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:46.899 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:46.899 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:23:47.156 /dev/nbd1 00:23:47.156 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:47.156 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:47.156 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:47.156 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:23:47.156 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:47.156 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:47.156 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:47.156 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:23:47.156 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:47.156 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:47.156 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:47.156 1+0 records in 00:23:47.156 1+0 records out 00:23:47.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306505 s, 13.4 MB/s 00:23:47.156 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:47.156 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # size=4096 00:23:47.156 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:47.156 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:47.156 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:23:47.156 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:47.156 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:47.156 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:47.414 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:23:47.414 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:47.414 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:47.414 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:47.414 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:23:47.414 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:47.414 12:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:47.672 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:47.672 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:47.672 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:47.672 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:47.672 
12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:47.672 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:47.672 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:23:47.672 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:47.672 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:47.672 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:47.672 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:47.672 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:47.672 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:23:47.672 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:47.672 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:23:47.930 12:55:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:47.930 [2024-12-05 12:55:30.299576] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:47.930 [2024-12-05 12:55:30.299716] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:47.930 [2024-12-05 12:55:30.299744] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:23:47.930 [2024-12-05 12:55:30.299752] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:47.930 [2024-12-05 12:55:30.301577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:47.930 [2024-12-05 12:55:30.301606] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:47.930 [2024-12-05 12:55:30.301681] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:47.930 [2024-12-05 12:55:30.301718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:47.930 [2024-12-05 12:55:30.301822] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:47.930 spare 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:47.930 [2024-12-05 12:55:30.401898] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:47.930 [2024-12-05 12:55:30.401933] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:47.930 [2024-12-05 12:55:30.402194] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:23:47.930 [2024-12-05 12:55:30.402336] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:47.930 [2024-12-05 12:55:30.402345] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:47.930 [2024-12-05 12:55:30.402481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:47.930 "name": "raid_bdev1", 00:23:47.930 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:47.930 "strip_size_kb": 0, 00:23:47.930 "state": "online", 00:23:47.930 "raid_level": "raid1", 00:23:47.930 "superblock": true, 00:23:47.930 "num_base_bdevs": 2, 00:23:47.930 "num_base_bdevs_discovered": 2, 00:23:47.930 "num_base_bdevs_operational": 2, 00:23:47.930 "base_bdevs_list": [ 00:23:47.930 { 00:23:47.930 "name": "spare", 00:23:47.930 "uuid": "f958daaf-c697-58eb-a429-0eab347276c8", 00:23:47.930 "is_configured": true, 00:23:47.930 "data_offset": 2048, 00:23:47.930 "data_size": 63488 00:23:47.930 }, 00:23:47.930 { 00:23:47.930 "name": "BaseBdev2", 00:23:47.930 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:47.930 "is_configured": true, 00:23:47.930 
"data_offset": 2048, 00:23:47.930 "data_size": 63488 00:23:47.930 } 00:23:47.930 ] 00:23:47.930 }' 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:47.930 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:48.188 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:48.188 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:48.188 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:48.188 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:48.188 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:48.188 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.188 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.188 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:48.188 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.188 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.189 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:48.189 "name": "raid_bdev1", 00:23:48.189 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:48.189 "strip_size_kb": 0, 00:23:48.189 "state": "online", 00:23:48.189 "raid_level": "raid1", 00:23:48.189 "superblock": true, 00:23:48.189 "num_base_bdevs": 2, 00:23:48.189 "num_base_bdevs_discovered": 2, 00:23:48.189 "num_base_bdevs_operational": 2, 00:23:48.189 "base_bdevs_list": [ 00:23:48.189 { 00:23:48.189 "name": "spare", 00:23:48.189 "uuid": 
"f958daaf-c697-58eb-a429-0eab347276c8", 00:23:48.189 "is_configured": true, 00:23:48.189 "data_offset": 2048, 00:23:48.189 "data_size": 63488 00:23:48.189 }, 00:23:48.189 { 00:23:48.189 "name": "BaseBdev2", 00:23:48.189 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:48.189 "is_configured": true, 00:23:48.189 "data_offset": 2048, 00:23:48.189 "data_size": 63488 00:23:48.189 } 00:23:48.189 ] 00:23:48.189 }' 00:23:48.189 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:48.446 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:48.446 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:48.446 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:48.446 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.446 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:48.446 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.446 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:48.446 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.446 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:48.446 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:48.446 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.446 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:48.446 [2024-12-05 12:55:30.851794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:48.446 
12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.446 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:48.446 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:48.446 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:48.446 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:48.447 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:48.447 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:48.447 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:48.447 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:48.447 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:48.447 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:48.447 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.447 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.447 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.447 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:48.447 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.447 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:48.447 "name": "raid_bdev1", 00:23:48.447 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 
00:23:48.447 "strip_size_kb": 0, 00:23:48.447 "state": "online", 00:23:48.447 "raid_level": "raid1", 00:23:48.447 "superblock": true, 00:23:48.447 "num_base_bdevs": 2, 00:23:48.447 "num_base_bdevs_discovered": 1, 00:23:48.447 "num_base_bdevs_operational": 1, 00:23:48.447 "base_bdevs_list": [ 00:23:48.447 { 00:23:48.447 "name": null, 00:23:48.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.447 "is_configured": false, 00:23:48.447 "data_offset": 0, 00:23:48.447 "data_size": 63488 00:23:48.447 }, 00:23:48.447 { 00:23:48.447 "name": "BaseBdev2", 00:23:48.447 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:48.447 "is_configured": true, 00:23:48.447 "data_offset": 2048, 00:23:48.447 "data_size": 63488 00:23:48.447 } 00:23:48.447 ] 00:23:48.447 }' 00:23:48.447 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:48.447 12:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:48.704 12:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:48.704 12:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.704 12:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:48.704 [2024-12-05 12:55:31.167899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:48.704 [2024-12-05 12:55:31.168066] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:48.704 [2024-12-05 12:55:31.168080] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:48.704 [2024-12-05 12:55:31.168111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:48.704 [2024-12-05 12:55:31.177387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:23:48.704 12:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.704 12:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:48.704 [2024-12-05 12:55:31.178924] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:49.673 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:49.673 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:49.673 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:49.673 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:49.673 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:49.673 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.674 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.674 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.674 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:49.674 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.674 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:49.674 "name": "raid_bdev1", 00:23:49.674 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:49.674 "strip_size_kb": 0, 00:23:49.674 "state": "online", 
00:23:49.674 "raid_level": "raid1", 00:23:49.674 "superblock": true, 00:23:49.674 "num_base_bdevs": 2, 00:23:49.674 "num_base_bdevs_discovered": 2, 00:23:49.674 "num_base_bdevs_operational": 2, 00:23:49.674 "process": { 00:23:49.674 "type": "rebuild", 00:23:49.674 "target": "spare", 00:23:49.674 "progress": { 00:23:49.674 "blocks": 20480, 00:23:49.674 "percent": 32 00:23:49.674 } 00:23:49.674 }, 00:23:49.674 "base_bdevs_list": [ 00:23:49.674 { 00:23:49.674 "name": "spare", 00:23:49.674 "uuid": "f958daaf-c697-58eb-a429-0eab347276c8", 00:23:49.674 "is_configured": true, 00:23:49.674 "data_offset": 2048, 00:23:49.674 "data_size": 63488 00:23:49.674 }, 00:23:49.674 { 00:23:49.674 "name": "BaseBdev2", 00:23:49.674 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:49.674 "is_configured": true, 00:23:49.674 "data_offset": 2048, 00:23:49.674 "data_size": 63488 00:23:49.674 } 00:23:49.674 ] 00:23:49.674 }' 00:23:49.674 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:49.674 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:49.674 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:49.931 [2024-12-05 12:55:32.293316] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:49.931 [2024-12-05 12:55:32.384232] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:49.931 [2024-12-05 
12:55:32.384299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:49.931 [2024-12-05 12:55:32.384311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:49.931 [2024-12-05 12:55:32.384318] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:49.931 "name": "raid_bdev1", 00:23:49.931 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:49.931 "strip_size_kb": 0, 00:23:49.931 "state": "online", 00:23:49.931 "raid_level": "raid1", 00:23:49.931 "superblock": true, 00:23:49.931 "num_base_bdevs": 2, 00:23:49.931 "num_base_bdevs_discovered": 1, 00:23:49.931 "num_base_bdevs_operational": 1, 00:23:49.931 "base_bdevs_list": [ 00:23:49.931 { 00:23:49.931 "name": null, 00:23:49.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.931 "is_configured": false, 00:23:49.931 "data_offset": 0, 00:23:49.931 "data_size": 63488 00:23:49.931 }, 00:23:49.931 { 00:23:49.931 "name": "BaseBdev2", 00:23:49.931 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:49.931 "is_configured": true, 00:23:49.931 "data_offset": 2048, 00:23:49.931 "data_size": 63488 00:23:49.931 } 00:23:49.931 ] 00:23:49.931 }' 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:49.931 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:50.187 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:50.187 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.187 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:50.187 [2024-12-05 12:55:32.712291] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:50.187 [2024-12-05 12:55:32.712349] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.187 [2024-12-05 12:55:32.712367] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:23:50.187 [2024-12-05 12:55:32.712376] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.187 [2024-12-05 12:55:32.712759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.187 [2024-12-05 12:55:32.712777] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:50.187 [2024-12-05 12:55:32.712854] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:50.187 [2024-12-05 12:55:32.712865] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:50.187 [2024-12-05 12:55:32.712873] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:50.187 [2024-12-05 12:55:32.712893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:50.187 [2024-12-05 12:55:32.722395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:23:50.187 spare 00:23:50.187 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.187 12:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:50.187 [2024-12-05 12:55:32.723993] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:51.557 "name": "raid_bdev1", 00:23:51.557 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:51.557 "strip_size_kb": 0, 00:23:51.557 "state": "online", 00:23:51.557 "raid_level": "raid1", 00:23:51.557 "superblock": true, 00:23:51.557 "num_base_bdevs": 2, 00:23:51.557 "num_base_bdevs_discovered": 2, 00:23:51.557 "num_base_bdevs_operational": 2, 00:23:51.557 "process": { 00:23:51.557 "type": "rebuild", 00:23:51.557 "target": "spare", 00:23:51.557 "progress": { 00:23:51.557 "blocks": 20480, 00:23:51.557 "percent": 32 00:23:51.557 } 00:23:51.557 }, 00:23:51.557 "base_bdevs_list": [ 00:23:51.557 { 00:23:51.557 "name": "spare", 00:23:51.557 "uuid": "f958daaf-c697-58eb-a429-0eab347276c8", 00:23:51.557 "is_configured": true, 00:23:51.557 "data_offset": 2048, 00:23:51.557 "data_size": 63488 00:23:51.557 }, 00:23:51.557 { 00:23:51.557 "name": "BaseBdev2", 00:23:51.557 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:51.557 "is_configured": true, 00:23:51.557 "data_offset": 2048, 00:23:51.557 "data_size": 63488 00:23:51.557 } 00:23:51.557 ] 00:23:51.557 }' 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:51.557 [2024-12-05 12:55:33.830289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:51.557 [2024-12-05 12:55:33.929315] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:51.557 [2024-12-05 12:55:33.929509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:51.557 [2024-12-05 12:55:33.929569] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:51.557 [2024-12-05 12:55:33.929591] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:51.557 "name": "raid_bdev1", 00:23:51.557 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:51.557 "strip_size_kb": 0, 00:23:51.557 "state": "online", 00:23:51.557 "raid_level": "raid1", 00:23:51.557 "superblock": true, 00:23:51.557 "num_base_bdevs": 2, 00:23:51.557 "num_base_bdevs_discovered": 1, 00:23:51.557 "num_base_bdevs_operational": 1, 00:23:51.557 "base_bdevs_list": [ 00:23:51.557 { 00:23:51.557 "name": null, 00:23:51.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.557 "is_configured": false, 00:23:51.557 "data_offset": 0, 00:23:51.557 "data_size": 63488 00:23:51.557 }, 00:23:51.557 { 00:23:51.557 "name": "BaseBdev2", 00:23:51.557 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:51.557 "is_configured": true, 00:23:51.557 "data_offset": 2048, 00:23:51.557 "data_size": 63488 00:23:51.557 } 00:23:51.557 ] 00:23:51.557 }' 
00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:51.557 12:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:51.815 "name": "raid_bdev1", 00:23:51.815 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:51.815 "strip_size_kb": 0, 00:23:51.815 "state": "online", 00:23:51.815 "raid_level": "raid1", 00:23:51.815 "superblock": true, 00:23:51.815 "num_base_bdevs": 2, 00:23:51.815 "num_base_bdevs_discovered": 1, 00:23:51.815 "num_base_bdevs_operational": 1, 00:23:51.815 "base_bdevs_list": [ 00:23:51.815 { 00:23:51.815 "name": null, 00:23:51.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.815 "is_configured": false, 00:23:51.815 "data_offset": 0, 
00:23:51.815 "data_size": 63488 00:23:51.815 }, 00:23:51.815 { 00:23:51.815 "name": "BaseBdev2", 00:23:51.815 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:51.815 "is_configured": true, 00:23:51.815 "data_offset": 2048, 00:23:51.815 "data_size": 63488 00:23:51.815 } 00:23:51.815 ] 00:23:51.815 }' 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:51.815 [2024-12-05 12:55:34.369737] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:51.815 [2024-12-05 12:55:34.369784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:51.815 [2024-12-05 12:55:34.369805] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:51.815 [2024-12-05 12:55:34.369813] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:51.815 [2024-12-05 12:55:34.370157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:51.815 [2024-12-05 12:55:34.370167] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:51.815 [2024-12-05 12:55:34.370227] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:51.815 [2024-12-05 12:55:34.370238] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:51.815 [2024-12-05 12:55:34.370247] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:51.815 [2024-12-05 12:55:34.370255] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:51.815 BaseBdev1 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.815 12:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:53.191 "name": "raid_bdev1", 00:23:53.191 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:53.191 "strip_size_kb": 0, 00:23:53.191 "state": "online", 00:23:53.191 "raid_level": "raid1", 00:23:53.191 "superblock": true, 00:23:53.191 "num_base_bdevs": 2, 00:23:53.191 "num_base_bdevs_discovered": 1, 00:23:53.191 "num_base_bdevs_operational": 1, 00:23:53.191 "base_bdevs_list": [ 00:23:53.191 { 00:23:53.191 "name": null, 00:23:53.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.191 "is_configured": false, 00:23:53.191 "data_offset": 0, 00:23:53.191 "data_size": 63488 00:23:53.191 }, 00:23:53.191 { 00:23:53.191 "name": "BaseBdev2", 00:23:53.191 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:53.191 "is_configured": true, 00:23:53.191 "data_offset": 2048, 00:23:53.191 "data_size": 63488 00:23:53.191 } 00:23:53.191 ] 00:23:53.191 }' 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:53.191 "name": "raid_bdev1", 00:23:53.191 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:53.191 "strip_size_kb": 0, 00:23:53.191 "state": "online", 00:23:53.191 "raid_level": "raid1", 00:23:53.191 "superblock": true, 00:23:53.191 "num_base_bdevs": 2, 00:23:53.191 "num_base_bdevs_discovered": 1, 00:23:53.191 "num_base_bdevs_operational": 1, 00:23:53.191 "base_bdevs_list": [ 00:23:53.191 { 00:23:53.191 "name": null, 00:23:53.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.191 "is_configured": false, 00:23:53.191 "data_offset": 0, 00:23:53.191 "data_size": 63488 00:23:53.191 }, 00:23:53.191 { 00:23:53.191 "name": "BaseBdev2", 00:23:53.191 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:53.191 "is_configured": true, 
00:23:53.191 "data_offset": 2048, 00:23:53.191 "data_size": 63488 00:23:53.191 } 00:23:53.191 ] 00:23:53.191 }' 00:23:53.191 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:53.192 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:53.192 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:53.192 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:53.192 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:53.192 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:23:53.192 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:53.192 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:53.192 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:53.192 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:53.449 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:53.449 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:53.449 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.449 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:53.449 [2024-12-05 12:55:35.778172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:53.449 [2024-12-05 12:55:35.778383] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:53.449 [2024-12-05 12:55:35.778404] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:53.449 request: 00:23:53.449 { 00:23:53.449 "base_bdev": "BaseBdev1", 00:23:53.449 "raid_bdev": "raid_bdev1", 00:23:53.449 "method": "bdev_raid_add_base_bdev", 00:23:53.449 "req_id": 1 00:23:53.449 } 00:23:53.449 Got JSON-RPC error response 00:23:53.449 response: 00:23:53.449 { 00:23:53.449 "code": -22, 00:23:53.449 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:53.449 } 00:23:53.449 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:53.449 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:23:53.449 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:53.449 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:53.449 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:53.449 12:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:54.381 12:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:54.381 12:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:54.381 12:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:54.381 12:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:54.381 12:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:54.381 12:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:23:54.381 12:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:54.381 12:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:54.381 12:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:54.381 12:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:54.381 12:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:54.381 12:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.381 12:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.381 12:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:54.381 12:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.381 12:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:54.381 "name": "raid_bdev1", 00:23:54.381 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:54.381 "strip_size_kb": 0, 00:23:54.381 "state": "online", 00:23:54.381 "raid_level": "raid1", 00:23:54.381 "superblock": true, 00:23:54.381 "num_base_bdevs": 2, 00:23:54.381 "num_base_bdevs_discovered": 1, 00:23:54.381 "num_base_bdevs_operational": 1, 00:23:54.381 "base_bdevs_list": [ 00:23:54.381 { 00:23:54.381 "name": null, 00:23:54.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.381 "is_configured": false, 00:23:54.381 "data_offset": 0, 00:23:54.381 "data_size": 63488 00:23:54.381 }, 00:23:54.381 { 00:23:54.381 "name": "BaseBdev2", 00:23:54.381 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:54.381 "is_configured": true, 00:23:54.381 "data_offset": 2048, 00:23:54.381 "data_size": 63488 00:23:54.381 } 00:23:54.381 ] 00:23:54.381 }' 
00:23:54.381 12:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:54.381 12:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:54.639 "name": "raid_bdev1", 00:23:54.639 "uuid": "cabb9ce1-0364-4b7e-b4cb-ef002247ac40", 00:23:54.639 "strip_size_kb": 0, 00:23:54.639 "state": "online", 00:23:54.639 "raid_level": "raid1", 00:23:54.639 "superblock": true, 00:23:54.639 "num_base_bdevs": 2, 00:23:54.639 "num_base_bdevs_discovered": 1, 00:23:54.639 "num_base_bdevs_operational": 1, 00:23:54.639 "base_bdevs_list": [ 00:23:54.639 { 00:23:54.639 "name": null, 00:23:54.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.639 "is_configured": false, 00:23:54.639 "data_offset": 0, 
00:23:54.639 "data_size": 63488 00:23:54.639 }, 00:23:54.639 { 00:23:54.639 "name": "BaseBdev2", 00:23:54.639 "uuid": "69411377-c3b6-5d43-8bc6-57184dc7547f", 00:23:54.639 "is_configured": true, 00:23:54.639 "data_offset": 2048, 00:23:54.639 "data_size": 63488 00:23:54.639 } 00:23:54.639 ] 00:23:54.639 }' 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 74550 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 74550 ']' 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 74550 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74550 00:23:54.639 killing process with pid 74550 00:23:54.639 Received shutdown signal, test time was about 15.077175 seconds 00:23:54.639 00:23:54.639 Latency(us) 00:23:54.639 [2024-12-05T12:55:37.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.639 [2024-12-05T12:55:37.226Z] =================================================================================================================== 00:23:54.639 [2024-12-05T12:55:37.226Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74550' 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 74550 00:23:54.639 [2024-12-05 12:55:37.216257] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:54.639 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 74550 00:23:54.639 [2024-12-05 12:55:37.216354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:54.639 [2024-12-05 12:55:37.216395] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:54.639 [2024-12-05 12:55:37.216404] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:54.896 [2024-12-05 12:55:37.327861] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:55.460 ************************************ 00:23:55.460 END TEST raid_rebuild_test_sb_io 00:23:55.460 ************************************ 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:23:55.460 00:23:55.460 real 0m17.307s 00:23:55.460 user 0m21.977s 00:23:55.460 sys 0m1.506s 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:55.460 12:55:37 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:23:55.460 12:55:37 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:23:55.460 12:55:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:23:55.460 12:55:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:55.460 12:55:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:55.460 ************************************ 00:23:55.460 START TEST raid_rebuild_test 00:23:55.460 ************************************ 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75211 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75211 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75211 ']' 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.460 12:55:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:55.460 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:55.460 Zero copy mechanism will not be used. 00:23:55.460 [2024-12-05 12:55:38.029127] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:23:55.460 [2024-12-05 12:55:38.029250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75211 ] 00:23:55.716 [2024-12-05 12:55:38.185901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.716 [2024-12-05 12:55:38.269545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.973 [2024-12-05 12:55:38.378140] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:55.973 [2024-12-05 12:55:38.378182] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 
00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.540 BaseBdev1_malloc 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.540 [2024-12-05 12:55:38.891618] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:56.540 [2024-12-05 12:55:38.891669] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.540 [2024-12-05 12:55:38.891687] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:56.540 [2024-12-05 12:55:38.891696] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.540 [2024-12-05 12:55:38.893411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.540 [2024-12-05 12:55:38.893552] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:56.540 BaseBdev1 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.540 BaseBdev2_malloc 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.540 [2024-12-05 12:55:38.923016] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:56.540 [2024-12-05 12:55:38.923062] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.540 [2024-12-05 12:55:38.923080] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:56.540 [2024-12-05 12:55:38.923088] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.540 [2024-12-05 12:55:38.924835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.540 [2024-12-05 12:55:38.924949] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:56.540 BaseBdev2 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.540 BaseBdev3_malloc 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.540 [2024-12-05 12:55:38.974464] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:56.540 [2024-12-05 12:55:38.974524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.540 [2024-12-05 12:55:38.974542] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:56.540 [2024-12-05 12:55:38.974563] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.540 [2024-12-05 12:55:38.976251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.540 [2024-12-05 12:55:38.976284] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:56.540 BaseBdev3 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.540 12:55:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.540 BaseBdev4_malloc 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.540 [2024-12-05 12:55:39.005762] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:56.540 [2024-12-05 12:55:39.005895] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.540 [2024-12-05 12:55:39.005913] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:56.540 [2024-12-05 12:55:39.005922] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.540 [2024-12-05 12:55:39.007586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.540 [2024-12-05 12:55:39.007615] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:56.540 BaseBdev4 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.540 spare_malloc 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.540 spare_delay 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.540 12:55:39 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.540 [2024-12-05 12:55:39.045045] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:56.540 [2024-12-05 12:55:39.045177] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.540 [2024-12-05 12:55:39.045195] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:56.540 [2024-12-05 12:55:39.045203] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.540 [2024-12-05 12:55:39.046925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.540 [2024-12-05 12:55:39.046956] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:56.540 spare 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.540 [2024-12-05 12:55:39.053087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:56.540 [2024-12-05 12:55:39.054623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:56.540 [2024-12-05 12:55:39.054671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:56.540 [2024-12-05 12:55:39.054713] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:56.540 [2024-12-05 12:55:39.054778] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:56.540 [2024-12-05 12:55:39.054789] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:56.540 [2024-12-05 12:55:39.055003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:56.540 [2024-12-05 12:55:39.055129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:56.540 [2024-12-05 12:55:39.055137] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:56.540 [2024-12-05 12:55:39.055251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:56.540 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:56.541 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:56.541 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:56.541 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:56.541 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:56.541 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:56.541 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:56.541 12:55:39 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:23:56.541 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.541 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:56.541 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.541 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.541 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.541 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:56.541 "name": "raid_bdev1", 00:23:56.541 "uuid": "9df31d42-218b-4f1c-a3d4-f306b984d69b", 00:23:56.541 "strip_size_kb": 0, 00:23:56.541 "state": "online", 00:23:56.541 "raid_level": "raid1", 00:23:56.541 "superblock": false, 00:23:56.541 "num_base_bdevs": 4, 00:23:56.541 "num_base_bdevs_discovered": 4, 00:23:56.541 "num_base_bdevs_operational": 4, 00:23:56.541 "base_bdevs_list": [ 00:23:56.541 { 00:23:56.541 "name": "BaseBdev1", 00:23:56.541 "uuid": "16811241-d14f-52ce-8d28-46633e57077f", 00:23:56.541 "is_configured": true, 00:23:56.541 "data_offset": 0, 00:23:56.541 "data_size": 65536 00:23:56.541 }, 00:23:56.541 { 00:23:56.541 "name": "BaseBdev2", 00:23:56.541 "uuid": "9107e1f7-38b9-588d-a4bf-efea7a04a287", 00:23:56.541 "is_configured": true, 00:23:56.541 "data_offset": 0, 00:23:56.541 "data_size": 65536 00:23:56.541 }, 00:23:56.541 { 00:23:56.541 "name": "BaseBdev3", 00:23:56.541 "uuid": "e000c13d-7896-56ab-a274-58d4f51062c5", 00:23:56.541 "is_configured": true, 00:23:56.541 "data_offset": 0, 00:23:56.541 "data_size": 65536 00:23:56.541 }, 00:23:56.541 { 00:23:56.541 "name": "BaseBdev4", 00:23:56.541 "uuid": "df6e0788-b9b9-585c-8122-92eb4dc5f532", 00:23:56.541 "is_configured": true, 00:23:56.541 "data_offset": 0, 00:23:56.541 "data_size": 65536 00:23:56.541 } 00:23:56.541 ] 00:23:56.541 }' 
00:23:56.541 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:56.541 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:57.105 [2024-12-05 12:55:39.389427] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:57.105 [2024-12-05 12:55:39.637219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:57.105 /dev/nbd0 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:23:57.105 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:57.106 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:57.106 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:57.106 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:23:57.106 12:55:39 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:57.106 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:57.106 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:57.106 1+0 records in 00:23:57.106 1+0 records out 00:23:57.106 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000158963 s, 25.8 MB/s 00:23:57.106 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:57.106 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:23:57.106 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:57.106 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:57.106 12:55:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:23:57.106 12:55:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:57.106 12:55:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:57.106 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:23:57.106 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:23:57.106 12:55:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:24:03.744 65536+0 records in 00:24:03.744 65536+0 records out 00:24:03.744 33554432 bytes (34 MB, 32 MiB) copied, 5.52528 s, 6.1 MB/s 00:24:03.744 12:55:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:03.744 12:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:03.744 12:55:45 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:03.744 12:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:03.744 12:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:24:03.744 12:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:03.744 12:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:03.744 [2024-12-05 12:55:45.360721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.745 [2024-12-05 12:55:45.390523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.745 12:55:45 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:03.745 "name": "raid_bdev1", 00:24:03.745 "uuid": "9df31d42-218b-4f1c-a3d4-f306b984d69b", 00:24:03.745 "strip_size_kb": 0, 00:24:03.745 "state": "online", 00:24:03.745 "raid_level": "raid1", 00:24:03.745 "superblock": false, 00:24:03.745 "num_base_bdevs": 4, 00:24:03.745 "num_base_bdevs_discovered": 3, 
00:24:03.745 "num_base_bdevs_operational": 3, 00:24:03.745 "base_bdevs_list": [ 00:24:03.745 { 00:24:03.745 "name": null, 00:24:03.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.745 "is_configured": false, 00:24:03.745 "data_offset": 0, 00:24:03.745 "data_size": 65536 00:24:03.745 }, 00:24:03.745 { 00:24:03.745 "name": "BaseBdev2", 00:24:03.745 "uuid": "9107e1f7-38b9-588d-a4bf-efea7a04a287", 00:24:03.745 "is_configured": true, 00:24:03.745 "data_offset": 0, 00:24:03.745 "data_size": 65536 00:24:03.745 }, 00:24:03.745 { 00:24:03.745 "name": "BaseBdev3", 00:24:03.745 "uuid": "e000c13d-7896-56ab-a274-58d4f51062c5", 00:24:03.745 "is_configured": true, 00:24:03.745 "data_offset": 0, 00:24:03.745 "data_size": 65536 00:24:03.745 }, 00:24:03.745 { 00:24:03.745 "name": "BaseBdev4", 00:24:03.745 "uuid": "df6e0788-b9b9-585c-8122-92eb4dc5f532", 00:24:03.745 "is_configured": true, 00:24:03.745 "data_offset": 0, 00:24:03.745 "data_size": 65536 00:24:03.745 } 00:24:03.745 ] 00:24:03.745 }' 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.745 [2024-12-05 12:55:45.706564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:03.745 [2024-12-05 12:55:45.714729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.745 12:55:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:03.745 [2024-12-05 12:55:45.716263] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:04.311 "name": "raid_bdev1", 00:24:04.311 "uuid": "9df31d42-218b-4f1c-a3d4-f306b984d69b", 00:24:04.311 "strip_size_kb": 0, 00:24:04.311 "state": "online", 00:24:04.311 "raid_level": "raid1", 00:24:04.311 "superblock": false, 00:24:04.311 "num_base_bdevs": 4, 00:24:04.311 "num_base_bdevs_discovered": 4, 00:24:04.311 "num_base_bdevs_operational": 4, 00:24:04.311 "process": { 00:24:04.311 "type": "rebuild", 00:24:04.311 "target": "spare", 00:24:04.311 "progress": { 00:24:04.311 "blocks": 20480, 00:24:04.311 "percent": 31 00:24:04.311 } 00:24:04.311 }, 00:24:04.311 "base_bdevs_list": [ 00:24:04.311 { 00:24:04.311 "name": "spare", 00:24:04.311 "uuid": "588a448c-9ef5-5af2-8d00-af37a2af139a", 00:24:04.311 
"is_configured": true, 00:24:04.311 "data_offset": 0, 00:24:04.311 "data_size": 65536 00:24:04.311 }, 00:24:04.311 { 00:24:04.311 "name": "BaseBdev2", 00:24:04.311 "uuid": "9107e1f7-38b9-588d-a4bf-efea7a04a287", 00:24:04.311 "is_configured": true, 00:24:04.311 "data_offset": 0, 00:24:04.311 "data_size": 65536 00:24:04.311 }, 00:24:04.311 { 00:24:04.311 "name": "BaseBdev3", 00:24:04.311 "uuid": "e000c13d-7896-56ab-a274-58d4f51062c5", 00:24:04.311 "is_configured": true, 00:24:04.311 "data_offset": 0, 00:24:04.311 "data_size": 65536 00:24:04.311 }, 00:24:04.311 { 00:24:04.311 "name": "BaseBdev4", 00:24:04.311 "uuid": "df6e0788-b9b9-585c-8122-92eb4dc5f532", 00:24:04.311 "is_configured": true, 00:24:04.311 "data_offset": 0, 00:24:04.311 "data_size": 65536 00:24:04.311 } 00:24:04.311 ] 00:24:04.311 }' 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.311 [2024-12-05 12:55:46.806451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:04.311 [2024-12-05 12:55:46.821142] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:04.311 [2024-12-05 12:55:46.821194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:04.311 [2024-12-05 12:55:46.821209] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:04.311 [2024-12-05 12:55:46.821218] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.311 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:24:04.311 "name": "raid_bdev1", 00:24:04.311 "uuid": "9df31d42-218b-4f1c-a3d4-f306b984d69b", 00:24:04.312 "strip_size_kb": 0, 00:24:04.312 "state": "online", 00:24:04.312 "raid_level": "raid1", 00:24:04.312 "superblock": false, 00:24:04.312 "num_base_bdevs": 4, 00:24:04.312 "num_base_bdevs_discovered": 3, 00:24:04.312 "num_base_bdevs_operational": 3, 00:24:04.312 "base_bdevs_list": [ 00:24:04.312 { 00:24:04.312 "name": null, 00:24:04.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.312 "is_configured": false, 00:24:04.312 "data_offset": 0, 00:24:04.312 "data_size": 65536 00:24:04.312 }, 00:24:04.312 { 00:24:04.312 "name": "BaseBdev2", 00:24:04.312 "uuid": "9107e1f7-38b9-588d-a4bf-efea7a04a287", 00:24:04.312 "is_configured": true, 00:24:04.312 "data_offset": 0, 00:24:04.312 "data_size": 65536 00:24:04.312 }, 00:24:04.312 { 00:24:04.312 "name": "BaseBdev3", 00:24:04.312 "uuid": "e000c13d-7896-56ab-a274-58d4f51062c5", 00:24:04.312 "is_configured": true, 00:24:04.312 "data_offset": 0, 00:24:04.312 "data_size": 65536 00:24:04.312 }, 00:24:04.312 { 00:24:04.312 "name": "BaseBdev4", 00:24:04.312 "uuid": "df6e0788-b9b9-585c-8122-92eb4dc5f532", 00:24:04.312 "is_configured": true, 00:24:04.312 "data_offset": 0, 00:24:04.312 "data_size": 65536 00:24:04.312 } 00:24:04.312 ] 00:24:04.312 }' 00:24:04.312 12:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:04.312 12:55:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.569 12:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:04.569 12:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:04.569 12:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:04.569 12:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:04.569 12:55:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:04.569 12:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.569 12:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.569 12:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.569 12:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.569 12:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.827 12:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:04.827 "name": "raid_bdev1", 00:24:04.827 "uuid": "9df31d42-218b-4f1c-a3d4-f306b984d69b", 00:24:04.827 "strip_size_kb": 0, 00:24:04.827 "state": "online", 00:24:04.827 "raid_level": "raid1", 00:24:04.827 "superblock": false, 00:24:04.827 "num_base_bdevs": 4, 00:24:04.827 "num_base_bdevs_discovered": 3, 00:24:04.827 "num_base_bdevs_operational": 3, 00:24:04.827 "base_bdevs_list": [ 00:24:04.827 { 00:24:04.827 "name": null, 00:24:04.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.827 "is_configured": false, 00:24:04.827 "data_offset": 0, 00:24:04.827 "data_size": 65536 00:24:04.827 }, 00:24:04.827 { 00:24:04.827 "name": "BaseBdev2", 00:24:04.827 "uuid": "9107e1f7-38b9-588d-a4bf-efea7a04a287", 00:24:04.827 "is_configured": true, 00:24:04.827 "data_offset": 0, 00:24:04.827 "data_size": 65536 00:24:04.827 }, 00:24:04.827 { 00:24:04.827 "name": "BaseBdev3", 00:24:04.827 "uuid": "e000c13d-7896-56ab-a274-58d4f51062c5", 00:24:04.827 "is_configured": true, 00:24:04.827 "data_offset": 0, 00:24:04.827 "data_size": 65536 00:24:04.827 }, 00:24:04.827 { 00:24:04.827 "name": "BaseBdev4", 00:24:04.827 "uuid": "df6e0788-b9b9-585c-8122-92eb4dc5f532", 00:24:04.827 "is_configured": true, 00:24:04.827 "data_offset": 0, 00:24:04.827 "data_size": 65536 00:24:04.827 } 
00:24:04.827 ] 00:24:04.827 }' 00:24:04.827 12:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:04.827 12:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:04.827 12:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:04.827 12:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:04.827 12:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:04.827 12:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.827 12:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.827 [2024-12-05 12:55:47.229405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:04.827 [2024-12-05 12:55:47.236991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:24:04.827 12:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.827 12:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:04.827 [2024-12-05 12:55:47.238608] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:05.759 "name": "raid_bdev1", 00:24:05.759 "uuid": "9df31d42-218b-4f1c-a3d4-f306b984d69b", 00:24:05.759 "strip_size_kb": 0, 00:24:05.759 "state": "online", 00:24:05.759 "raid_level": "raid1", 00:24:05.759 "superblock": false, 00:24:05.759 "num_base_bdevs": 4, 00:24:05.759 "num_base_bdevs_discovered": 4, 00:24:05.759 "num_base_bdevs_operational": 4, 00:24:05.759 "process": { 00:24:05.759 "type": "rebuild", 00:24:05.759 "target": "spare", 00:24:05.759 "progress": { 00:24:05.759 "blocks": 20480, 00:24:05.759 "percent": 31 00:24:05.759 } 00:24:05.759 }, 00:24:05.759 "base_bdevs_list": [ 00:24:05.759 { 00:24:05.759 "name": "spare", 00:24:05.759 "uuid": "588a448c-9ef5-5af2-8d00-af37a2af139a", 00:24:05.759 "is_configured": true, 00:24:05.759 "data_offset": 0, 00:24:05.759 "data_size": 65536 00:24:05.759 }, 00:24:05.759 { 00:24:05.759 "name": "BaseBdev2", 00:24:05.759 "uuid": "9107e1f7-38b9-588d-a4bf-efea7a04a287", 00:24:05.759 "is_configured": true, 00:24:05.759 "data_offset": 0, 00:24:05.759 "data_size": 65536 00:24:05.759 }, 00:24:05.759 { 00:24:05.759 "name": "BaseBdev3", 00:24:05.759 "uuid": "e000c13d-7896-56ab-a274-58d4f51062c5", 00:24:05.759 "is_configured": true, 00:24:05.759 "data_offset": 0, 00:24:05.759 "data_size": 65536 00:24:05.759 }, 00:24:05.759 { 00:24:05.759 "name": "BaseBdev4", 00:24:05.759 "uuid": "df6e0788-b9b9-585c-8122-92eb4dc5f532", 00:24:05.759 "is_configured": true, 00:24:05.759 "data_offset": 0, 
00:24:05.759 "data_size": 65536 00:24:05.759 } 00:24:05.759 ] 00:24:05.759 }' 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.759 12:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:06.017 [2024-12-05 12:55:48.344797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:06.017 [2024-12-05 12:55:48.443784] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:06.017 "name": "raid_bdev1", 00:24:06.017 "uuid": "9df31d42-218b-4f1c-a3d4-f306b984d69b", 00:24:06.017 "strip_size_kb": 0, 00:24:06.017 "state": "online", 00:24:06.017 "raid_level": "raid1", 00:24:06.017 "superblock": false, 00:24:06.017 "num_base_bdevs": 4, 00:24:06.017 "num_base_bdevs_discovered": 3, 00:24:06.017 "num_base_bdevs_operational": 3, 00:24:06.017 "process": { 00:24:06.017 "type": "rebuild", 00:24:06.017 "target": "spare", 00:24:06.017 "progress": { 00:24:06.017 "blocks": 24576, 00:24:06.017 "percent": 37 00:24:06.017 } 00:24:06.017 }, 00:24:06.017 "base_bdevs_list": [ 00:24:06.017 { 00:24:06.017 "name": "spare", 00:24:06.017 "uuid": "588a448c-9ef5-5af2-8d00-af37a2af139a", 00:24:06.017 "is_configured": true, 00:24:06.017 "data_offset": 0, 00:24:06.017 "data_size": 65536 00:24:06.017 }, 00:24:06.017 { 00:24:06.017 "name": null, 00:24:06.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.017 "is_configured": false, 00:24:06.017 "data_offset": 0, 00:24:06.017 
"data_size": 65536 00:24:06.017 }, 00:24:06.017 { 00:24:06.017 "name": "BaseBdev3", 00:24:06.017 "uuid": "e000c13d-7896-56ab-a274-58d4f51062c5", 00:24:06.017 "is_configured": true, 00:24:06.017 "data_offset": 0, 00:24:06.017 "data_size": 65536 00:24:06.017 }, 00:24:06.017 { 00:24:06.017 "name": "BaseBdev4", 00:24:06.017 "uuid": "df6e0788-b9b9-585c-8122-92eb4dc5f532", 00:24:06.017 "is_configured": true, 00:24:06.017 "data_offset": 0, 00:24:06.017 "data_size": 65536 00:24:06.017 } 00:24:06.017 ] 00:24:06.017 }' 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=341 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:06.017 "name": "raid_bdev1", 00:24:06.017 "uuid": "9df31d42-218b-4f1c-a3d4-f306b984d69b", 00:24:06.017 "strip_size_kb": 0, 00:24:06.017 "state": "online", 00:24:06.017 "raid_level": "raid1", 00:24:06.017 "superblock": false, 00:24:06.017 "num_base_bdevs": 4, 00:24:06.017 "num_base_bdevs_discovered": 3, 00:24:06.017 "num_base_bdevs_operational": 3, 00:24:06.017 "process": { 00:24:06.017 "type": "rebuild", 00:24:06.017 "target": "spare", 00:24:06.017 "progress": { 00:24:06.017 "blocks": 26624, 00:24:06.017 "percent": 40 00:24:06.017 } 00:24:06.017 }, 00:24:06.017 "base_bdevs_list": [ 00:24:06.017 { 00:24:06.017 "name": "spare", 00:24:06.017 "uuid": "588a448c-9ef5-5af2-8d00-af37a2af139a", 00:24:06.017 "is_configured": true, 00:24:06.017 "data_offset": 0, 00:24:06.017 "data_size": 65536 00:24:06.017 }, 00:24:06.017 { 00:24:06.017 "name": null, 00:24:06.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.017 "is_configured": false, 00:24:06.017 "data_offset": 0, 00:24:06.017 "data_size": 65536 00:24:06.017 }, 00:24:06.017 { 00:24:06.017 "name": "BaseBdev3", 00:24:06.017 "uuid": "e000c13d-7896-56ab-a274-58d4f51062c5", 00:24:06.017 "is_configured": true, 00:24:06.017 "data_offset": 0, 00:24:06.017 "data_size": 65536 00:24:06.017 }, 00:24:06.017 { 00:24:06.017 "name": "BaseBdev4", 00:24:06.017 "uuid": "df6e0788-b9b9-585c-8122-92eb4dc5f532", 00:24:06.017 "is_configured": true, 00:24:06.017 "data_offset": 0, 00:24:06.017 "data_size": 65536 00:24:06.017 } 00:24:06.017 ] 00:24:06.017 }' 00:24:06.017 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:06.275 12:55:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:06.275 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:06.275 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:06.275 12:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:07.208 12:55:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:07.208 12:55:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:07.208 12:55:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:07.208 12:55:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:07.208 12:55:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:07.208 12:55:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:07.208 12:55:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.208 12:55:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.208 12:55:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.208 12:55:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.208 12:55:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.208 12:55:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:07.208 "name": "raid_bdev1", 00:24:07.209 "uuid": "9df31d42-218b-4f1c-a3d4-f306b984d69b", 00:24:07.209 "strip_size_kb": 0, 00:24:07.209 "state": "online", 00:24:07.209 "raid_level": "raid1", 00:24:07.209 "superblock": false, 00:24:07.209 "num_base_bdevs": 4, 00:24:07.209 "num_base_bdevs_discovered": 3, 
00:24:07.209 "num_base_bdevs_operational": 3, 00:24:07.209 "process": { 00:24:07.209 "type": "rebuild", 00:24:07.209 "target": "spare", 00:24:07.209 "progress": { 00:24:07.209 "blocks": 47104, 00:24:07.209 "percent": 71 00:24:07.209 } 00:24:07.209 }, 00:24:07.209 "base_bdevs_list": [ 00:24:07.209 { 00:24:07.209 "name": "spare", 00:24:07.209 "uuid": "588a448c-9ef5-5af2-8d00-af37a2af139a", 00:24:07.209 "is_configured": true, 00:24:07.209 "data_offset": 0, 00:24:07.209 "data_size": 65536 00:24:07.209 }, 00:24:07.209 { 00:24:07.209 "name": null, 00:24:07.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.209 "is_configured": false, 00:24:07.209 "data_offset": 0, 00:24:07.209 "data_size": 65536 00:24:07.209 }, 00:24:07.209 { 00:24:07.209 "name": "BaseBdev3", 00:24:07.209 "uuid": "e000c13d-7896-56ab-a274-58d4f51062c5", 00:24:07.209 "is_configured": true, 00:24:07.209 "data_offset": 0, 00:24:07.209 "data_size": 65536 00:24:07.209 }, 00:24:07.209 { 00:24:07.209 "name": "BaseBdev4", 00:24:07.209 "uuid": "df6e0788-b9b9-585c-8122-92eb4dc5f532", 00:24:07.209 "is_configured": true, 00:24:07.209 "data_offset": 0, 00:24:07.209 "data_size": 65536 00:24:07.209 } 00:24:07.209 ] 00:24:07.209 }' 00:24:07.209 12:55:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:07.209 12:55:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:07.209 12:55:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:07.209 12:55:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:07.209 12:55:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:08.142 [2024-12-05 12:55:50.452352] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:08.142 [2024-12-05 12:55:50.452425] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev 
raid_bdev1 00:24:08.142 [2024-12-05 12:55:50.452470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:08.399 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:08.399 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.399 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:08.399 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:08.399 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:08.399 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:08.399 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.399 12:55:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:08.400 "name": "raid_bdev1", 00:24:08.400 "uuid": "9df31d42-218b-4f1c-a3d4-f306b984d69b", 00:24:08.400 "strip_size_kb": 0, 00:24:08.400 "state": "online", 00:24:08.400 "raid_level": "raid1", 00:24:08.400 "superblock": false, 00:24:08.400 "num_base_bdevs": 4, 00:24:08.400 "num_base_bdevs_discovered": 3, 00:24:08.400 "num_base_bdevs_operational": 3, 00:24:08.400 "base_bdevs_list": [ 00:24:08.400 { 00:24:08.400 "name": "spare", 00:24:08.400 "uuid": "588a448c-9ef5-5af2-8d00-af37a2af139a", 00:24:08.400 "is_configured": true, 00:24:08.400 "data_offset": 0, 00:24:08.400 "data_size": 
65536 00:24:08.400 }, 00:24:08.400 { 00:24:08.400 "name": null, 00:24:08.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.400 "is_configured": false, 00:24:08.400 "data_offset": 0, 00:24:08.400 "data_size": 65536 00:24:08.400 }, 00:24:08.400 { 00:24:08.400 "name": "BaseBdev3", 00:24:08.400 "uuid": "e000c13d-7896-56ab-a274-58d4f51062c5", 00:24:08.400 "is_configured": true, 00:24:08.400 "data_offset": 0, 00:24:08.400 "data_size": 65536 00:24:08.400 }, 00:24:08.400 { 00:24:08.400 "name": "BaseBdev4", 00:24:08.400 "uuid": "df6e0788-b9b9-585c-8122-92eb4dc5f532", 00:24:08.400 "is_configured": true, 00:24:08.400 "data_offset": 0, 00:24:08.400 "data_size": 65536 00:24:08.400 } 00:24:08.400 ] 00:24:08.400 }' 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:08.400 "name": "raid_bdev1", 00:24:08.400 "uuid": "9df31d42-218b-4f1c-a3d4-f306b984d69b", 00:24:08.400 "strip_size_kb": 0, 00:24:08.400 "state": "online", 00:24:08.400 "raid_level": "raid1", 00:24:08.400 "superblock": false, 00:24:08.400 "num_base_bdevs": 4, 00:24:08.400 "num_base_bdevs_discovered": 3, 00:24:08.400 "num_base_bdevs_operational": 3, 00:24:08.400 "base_bdevs_list": [ 00:24:08.400 { 00:24:08.400 "name": "spare", 00:24:08.400 "uuid": "588a448c-9ef5-5af2-8d00-af37a2af139a", 00:24:08.400 "is_configured": true, 00:24:08.400 "data_offset": 0, 00:24:08.400 "data_size": 65536 00:24:08.400 }, 00:24:08.400 { 00:24:08.400 "name": null, 00:24:08.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.400 "is_configured": false, 00:24:08.400 "data_offset": 0, 00:24:08.400 "data_size": 65536 00:24:08.400 }, 00:24:08.400 { 00:24:08.400 "name": "BaseBdev3", 00:24:08.400 "uuid": "e000c13d-7896-56ab-a274-58d4f51062c5", 00:24:08.400 "is_configured": true, 00:24:08.400 "data_offset": 0, 00:24:08.400 "data_size": 65536 00:24:08.400 }, 00:24:08.400 { 00:24:08.400 "name": "BaseBdev4", 00:24:08.400 "uuid": "df6e0788-b9b9-585c-8122-92eb4dc5f532", 00:24:08.400 "is_configured": true, 00:24:08.400 "data_offset": 0, 00:24:08.400 "data_size": 65536 00:24:08.400 } 00:24:08.400 ] 00:24:08.400 }' 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:08.400 "name": "raid_bdev1", 00:24:08.400 "uuid": "9df31d42-218b-4f1c-a3d4-f306b984d69b", 00:24:08.400 "strip_size_kb": 0, 
00:24:08.400 "state": "online", 00:24:08.400 "raid_level": "raid1", 00:24:08.400 "superblock": false, 00:24:08.400 "num_base_bdevs": 4, 00:24:08.400 "num_base_bdevs_discovered": 3, 00:24:08.400 "num_base_bdevs_operational": 3, 00:24:08.400 "base_bdevs_list": [ 00:24:08.400 { 00:24:08.400 "name": "spare", 00:24:08.400 "uuid": "588a448c-9ef5-5af2-8d00-af37a2af139a", 00:24:08.400 "is_configured": true, 00:24:08.400 "data_offset": 0, 00:24:08.400 "data_size": 65536 00:24:08.400 }, 00:24:08.400 { 00:24:08.400 "name": null, 00:24:08.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.400 "is_configured": false, 00:24:08.400 "data_offset": 0, 00:24:08.400 "data_size": 65536 00:24:08.400 }, 00:24:08.400 { 00:24:08.400 "name": "BaseBdev3", 00:24:08.400 "uuid": "e000c13d-7896-56ab-a274-58d4f51062c5", 00:24:08.400 "is_configured": true, 00:24:08.400 "data_offset": 0, 00:24:08.400 "data_size": 65536 00:24:08.400 }, 00:24:08.400 { 00:24:08.400 "name": "BaseBdev4", 00:24:08.400 "uuid": "df6e0788-b9b9-585c-8122-92eb4dc5f532", 00:24:08.400 "is_configured": true, 00:24:08.400 "data_offset": 0, 00:24:08.400 "data_size": 65536 00:24:08.400 } 00:24:08.400 ] 00:24:08.400 }' 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:08.400 12:55:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.657 12:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:08.657 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.657 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.657 [2024-12-05 12:55:51.228614] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:08.657 [2024-12-05 12:55:51.228643] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:08.657 [2024-12-05 12:55:51.228706] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:08.657 [2024-12-05 12:55:51.228772] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:08.657 [2024-12-05 12:55:51.228780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:08.657 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.657 12:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:24:08.657 12:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.657 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.657 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.914 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.914 12:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:08.914 12:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:08.914 12:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:08.914 12:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:08.914 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:08.914 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:08.914 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:08.914 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:08.914 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:08.914 12:55:51 bdev_raid.raid_rebuild_test 
-- bdev/nbd_common.sh@12 -- # local i 00:24:08.914 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:08.914 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:08.914 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:08.914 /dev/nbd0 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:09.171 1+0 records in 00:24:09.171 1+0 records out 00:24:09.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225068 s, 18.2 MB/s 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # 
size=4096 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:09.171 /dev/nbd1 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:09.171 1+0 records in 00:24:09.171 1+0 records out 
00:24:09.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002445 s, 16.8 MB/s 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:09.171 12:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:09.428 12:55:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:09.428 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:09.428 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:09.428 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:09.428 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:24:09.428 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:09.428 12:55:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:09.685 12:55:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:09.685 12:55:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:09.685 
12:55:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:09.685 12:55:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:09.685 12:55:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:09.685 12:55:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:09.685 12:55:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:09.685 12:55:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:09.685 12:55:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:09.685 12:55:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:09.685 12:55:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:09.685 12:55:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:09.685 12:55:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:09.685 12:55:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:09.685 12:55:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:09.685 12:55:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:09.685 12:55:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:09.686 12:55:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:09.686 12:55:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:24:09.686 12:55:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75211 00:24:09.686 12:55:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75211 ']' 00:24:09.686 12:55:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 
-- # kill -0 75211 00:24:09.686 12:55:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:24:09.686 12:55:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.686 12:55:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75211 00:24:09.686 12:55:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:09.686 12:55:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:09.686 12:55:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75211' 00:24:09.686 killing process with pid 75211 00:24:09.686 12:55:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75211 00:24:09.686 Received shutdown signal, test time was about 60.000000 seconds 00:24:09.686 00:24:09.686 Latency(us) 00:24:09.686 [2024-12-05T12:55:52.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.686 [2024-12-05T12:55:52.273Z] =================================================================================================================== 00:24:09.686 [2024-12-05T12:55:52.273Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:09.686 [2024-12-05 12:55:52.261992] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:09.686 12:55:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75211 00:24:09.943 [2024-12-05 12:55:52.497268] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:10.507 12:55:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:24:10.507 00:24:10.507 real 0m15.095s 00:24:10.507 user 0m16.447s 00:24:10.507 sys 0m2.620s 00:24:10.507 12:55:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:10.507 12:55:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:24:10.507 ************************************ 00:24:10.507 END TEST raid_rebuild_test 00:24:10.507 ************************************ 00:24:10.507 12:55:53 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:24:10.507 12:55:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:24:10.507 12:55:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:10.507 12:55:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:10.764 ************************************ 00:24:10.764 START TEST raid_rebuild_test_sb 00:24:10.764 ************************************ 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- 
# raid_pid=75637 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75637 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75637 ']' 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:10.764 12:55:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:10.764 [2024-12-05 12:55:53.172498] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:24:10.764 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:10.764 Zero copy mechanism will not be used. 
00:24:10.764 [2024-12-05 12:55:53.172650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75637 ] 00:24:10.764 [2024-12-05 12:55:53.332105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.021 [2024-12-05 12:55:53.434589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.021 [2024-12-05 12:55:53.570382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:11.021 [2024-12-05 12:55:53.570433] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:11.585 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:11.585 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:24:11.585 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:11.585 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:11.585 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.585 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.585 BaseBdev1_malloc 00:24:11.585 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.585 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:11.585 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.585 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.585 [2024-12-05 12:55:54.058313] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:24:11.585 [2024-12-05 12:55:54.058373] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:11.585 [2024-12-05 12:55:54.058394] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:11.585 [2024-12-05 12:55:54.058406] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:11.585 [2024-12-05 12:55:54.060525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:11.585 [2024-12-05 12:55:54.060562] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:11.585 BaseBdev1 00:24:11.585 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.585 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:11.585 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:11.585 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.586 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.586 BaseBdev2_malloc 00:24:11.586 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.586 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:11.586 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.586 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.586 [2024-12-05 12:55:54.094213] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:11.586 [2024-12-05 12:55:54.094268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:11.586 [2024-12-05 12:55:54.094291] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:11.586 [2024-12-05 12:55:54.094302] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:11.586 [2024-12-05 12:55:54.096372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:11.586 [2024-12-05 12:55:54.096408] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:11.586 BaseBdev2 00:24:11.586 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.586 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:11.586 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:11.586 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.586 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.586 BaseBdev3_malloc 00:24:11.586 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.586 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:11.586 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.586 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.586 [2024-12-05 12:55:54.150019] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:11.586 [2024-12-05 12:55:54.150073] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:11.586 [2024-12-05 12:55:54.150095] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:11.586 [2024-12-05 12:55:54.150106] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:24:11.586 [2024-12-05 12:55:54.152210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:11.586 [2024-12-05 12:55:54.152248] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:11.586 BaseBdev3 00:24:11.586 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.586 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:11.586 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:11.586 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.586 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.842 BaseBdev4_malloc 00:24:11.842 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.842 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:11.842 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.842 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.842 [2024-12-05 12:55:54.186008] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:11.842 [2024-12-05 12:55:54.186061] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:11.842 [2024-12-05 12:55:54.186079] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:11.842 [2024-12-05 12:55:54.186093] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:11.842 [2024-12-05 12:55:54.188194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:11.842 [2024-12-05 12:55:54.188230] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:11.842 BaseBdev4 00:24:11.842 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.842 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:24:11.842 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.842 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.842 spare_malloc 00:24:11.842 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.843 spare_delay 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.843 [2024-12-05 12:55:54.230317] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:11.843 [2024-12-05 12:55:54.230367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:11.843 [2024-12-05 12:55:54.230384] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:11.843 [2024-12-05 12:55:54.230394] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:24:11.843 [2024-12-05 12:55:54.232466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:11.843 [2024-12-05 12:55:54.232514] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:11.843 spare 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.843 [2024-12-05 12:55:54.238369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:11.843 [2024-12-05 12:55:54.240181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:11.843 [2024-12-05 12:55:54.240246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:11.843 [2024-12-05 12:55:54.240297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:11.843 [2024-12-05 12:55:54.240471] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:11.843 [2024-12-05 12:55:54.240503] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:11.843 [2024-12-05 12:55:54.240747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:11.843 [2024-12-05 12:55:54.240907] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:11.843 [2024-12-05 12:55:54.240921] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:11.843 [2024-12-05 12:55:54.241056] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:11.843 "name": "raid_bdev1", 00:24:11.843 "uuid": 
"129b31e9-4198-4560-90a3-7d3364a213ea", 00:24:11.843 "strip_size_kb": 0, 00:24:11.843 "state": "online", 00:24:11.843 "raid_level": "raid1", 00:24:11.843 "superblock": true, 00:24:11.843 "num_base_bdevs": 4, 00:24:11.843 "num_base_bdevs_discovered": 4, 00:24:11.843 "num_base_bdevs_operational": 4, 00:24:11.843 "base_bdevs_list": [ 00:24:11.843 { 00:24:11.843 "name": "BaseBdev1", 00:24:11.843 "uuid": "84a9d88a-a462-526c-9e73-3c7a9ab8a177", 00:24:11.843 "is_configured": true, 00:24:11.843 "data_offset": 2048, 00:24:11.843 "data_size": 63488 00:24:11.843 }, 00:24:11.843 { 00:24:11.843 "name": "BaseBdev2", 00:24:11.843 "uuid": "09070ead-b0a8-508e-821e-cce4cd16d8dc", 00:24:11.843 "is_configured": true, 00:24:11.843 "data_offset": 2048, 00:24:11.843 "data_size": 63488 00:24:11.843 }, 00:24:11.843 { 00:24:11.843 "name": "BaseBdev3", 00:24:11.843 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375", 00:24:11.843 "is_configured": true, 00:24:11.843 "data_offset": 2048, 00:24:11.843 "data_size": 63488 00:24:11.843 }, 00:24:11.843 { 00:24:11.843 "name": "BaseBdev4", 00:24:11.843 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846", 00:24:11.843 "is_configured": true, 00:24:11.843 "data_offset": 2048, 00:24:11.843 "data_size": 63488 00:24:11.843 } 00:24:11.843 ] 00:24:11.843 }' 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:11.843 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.101 [2024-12-05 12:55:54.530785] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:12.101 12:55:54 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:12.101 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:12.360 [2024-12-05 12:55:54.722535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:12.360 /dev/nbd0 00:24:12.360 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:12.360 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:12.360 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:12.360 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:24:12.360 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:12.360 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:12.360 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:12.360 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:24:12.360 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:12.360 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:12.360 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:12.360 1+0 records in 00:24:12.360 1+0 records out 00:24:12.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247587 s, 16.5 MB/s 00:24:12.360 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:12.360 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:24:12.360 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:12.360 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:12.360 12:55:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:24:12.360 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:12.360 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:12.360 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:24:12.360 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:24:12.360 12:55:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:24:18.912 63488+0 records in 00:24:18.912 63488+0 records out 00:24:18.912 32505856 bytes (33 MB, 31 MiB) copied, 5.44439 s, 6.0 MB/s 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:18.912 [2024-12-05 12:56:00.386380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.912 [2024-12-05 12:56:00.392133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.912 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:18.912 "name": "raid_bdev1", 00:24:18.912 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea", 00:24:18.912 "strip_size_kb": 0, 00:24:18.912 "state": "online", 00:24:18.912 "raid_level": "raid1", 00:24:18.912 "superblock": true, 00:24:18.912 "num_base_bdevs": 4, 00:24:18.912 "num_base_bdevs_discovered": 3, 00:24:18.912 "num_base_bdevs_operational": 3, 00:24:18.912 "base_bdevs_list": [ 00:24:18.912 { 00:24:18.912 "name": null, 00:24:18.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.912 "is_configured": false, 00:24:18.912 "data_offset": 0, 00:24:18.912 "data_size": 63488 00:24:18.912 }, 00:24:18.912 { 00:24:18.912 "name": "BaseBdev2", 00:24:18.912 "uuid": "09070ead-b0a8-508e-821e-cce4cd16d8dc", 00:24:18.912 "is_configured": true, 00:24:18.912 
"data_offset": 2048, 00:24:18.912 "data_size": 63488 00:24:18.912 }, 00:24:18.912 { 00:24:18.912 "name": "BaseBdev3", 00:24:18.912 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375", 00:24:18.912 "is_configured": true, 00:24:18.912 "data_offset": 2048, 00:24:18.912 "data_size": 63488 00:24:18.912 }, 00:24:18.912 { 00:24:18.912 "name": "BaseBdev4", 00:24:18.913 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846", 00:24:18.913 "is_configured": true, 00:24:18.913 "data_offset": 2048, 00:24:18.913 "data_size": 63488 00:24:18.913 } 00:24:18.913 ] 00:24:18.913 }' 00:24:18.913 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:18.913 12:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.913 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:18.913 12:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.913 12:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.913 [2024-12-05 12:56:00.700189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:18.913 [2024-12-05 12:56:00.708335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:24:18.913 12:56:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.913 12:56:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:18.913 [2024-12-05 12:56:00.709914] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:19.169 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:19.169 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:19.169 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:24:19.169 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:19.169 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:19.169 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.169 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.169 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.169 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.169 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.169 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:19.169 "name": "raid_bdev1", 00:24:19.169 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea", 00:24:19.169 "strip_size_kb": 0, 00:24:19.169 "state": "online", 00:24:19.169 "raid_level": "raid1", 00:24:19.169 "superblock": true, 00:24:19.169 "num_base_bdevs": 4, 00:24:19.169 "num_base_bdevs_discovered": 4, 00:24:19.169 "num_base_bdevs_operational": 4, 00:24:19.169 "process": { 00:24:19.169 "type": "rebuild", 00:24:19.169 "target": "spare", 00:24:19.169 "progress": { 00:24:19.169 "blocks": 20480, 00:24:19.169 "percent": 32 00:24:19.169 } 00:24:19.170 }, 00:24:19.170 "base_bdevs_list": [ 00:24:19.170 { 00:24:19.170 "name": "spare", 00:24:19.170 "uuid": "b2f2cc00-122e-55cc-be42-9a9077fabe17", 00:24:19.170 "is_configured": true, 00:24:19.170 "data_offset": 2048, 00:24:19.170 "data_size": 63488 00:24:19.170 }, 00:24:19.170 { 00:24:19.170 "name": "BaseBdev2", 00:24:19.170 "uuid": "09070ead-b0a8-508e-821e-cce4cd16d8dc", 00:24:19.170 "is_configured": true, 00:24:19.170 "data_offset": 2048, 00:24:19.170 "data_size": 63488 00:24:19.170 }, 00:24:19.170 { 00:24:19.170 "name": "BaseBdev3", 00:24:19.170 "uuid": 
"0152feb4-cb0c-5bfc-a805-2d1b38846375", 00:24:19.170 "is_configured": true, 00:24:19.170 "data_offset": 2048, 00:24:19.170 "data_size": 63488 00:24:19.170 }, 00:24:19.170 { 00:24:19.170 "name": "BaseBdev4", 00:24:19.170 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846", 00:24:19.170 "is_configured": true, 00:24:19.170 "data_offset": 2048, 00:24:19.170 "data_size": 63488 00:24:19.170 } 00:24:19.170 ] 00:24:19.170 }' 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.508 [2024-12-05 12:56:01.820083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:19.508 [2024-12-05 12:56:01.915205] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:19.508 [2024-12-05 12:56:01.915268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:19.508 [2024-12-05 12:56:01.915282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:19.508 [2024-12-05 12:56:01.915290] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.508 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:19.508 "name": "raid_bdev1", 00:24:19.508 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea", 00:24:19.508 "strip_size_kb": 0, 00:24:19.508 "state": "online", 00:24:19.508 "raid_level": "raid1", 00:24:19.508 "superblock": true, 00:24:19.508 "num_base_bdevs": 4, 00:24:19.508 
"num_base_bdevs_discovered": 3, 00:24:19.508 "num_base_bdevs_operational": 3, 00:24:19.508 "base_bdevs_list": [ 00:24:19.508 { 00:24:19.508 "name": null, 00:24:19.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.508 "is_configured": false, 00:24:19.508 "data_offset": 0, 00:24:19.508 "data_size": 63488 00:24:19.508 }, 00:24:19.508 { 00:24:19.508 "name": "BaseBdev2", 00:24:19.508 "uuid": "09070ead-b0a8-508e-821e-cce4cd16d8dc", 00:24:19.508 "is_configured": true, 00:24:19.508 "data_offset": 2048, 00:24:19.508 "data_size": 63488 00:24:19.508 }, 00:24:19.508 { 00:24:19.509 "name": "BaseBdev3", 00:24:19.509 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375", 00:24:19.509 "is_configured": true, 00:24:19.509 "data_offset": 2048, 00:24:19.509 "data_size": 63488 00:24:19.509 }, 00:24:19.509 { 00:24:19.509 "name": "BaseBdev4", 00:24:19.509 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846", 00:24:19.509 "is_configured": true, 00:24:19.509 "data_offset": 2048, 00:24:19.509 "data_size": 63488 00:24:19.509 } 00:24:19.509 ] 00:24:19.509 }' 00:24:19.509 12:56:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:19.509 12:56:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.768 12:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:19.768 12:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:19.768 12:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:19.768 12:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:19.768 12:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:19.768 12:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.768 12:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:24:19.768 12:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.768 12:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.768 12:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.768 12:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:19.768 "name": "raid_bdev1", 00:24:19.768 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea", 00:24:19.768 "strip_size_kb": 0, 00:24:19.768 "state": "online", 00:24:19.768 "raid_level": "raid1", 00:24:19.768 "superblock": true, 00:24:19.768 "num_base_bdevs": 4, 00:24:19.768 "num_base_bdevs_discovered": 3, 00:24:19.768 "num_base_bdevs_operational": 3, 00:24:19.768 "base_bdevs_list": [ 00:24:19.768 { 00:24:19.768 "name": null, 00:24:19.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.768 "is_configured": false, 00:24:19.768 "data_offset": 0, 00:24:19.768 "data_size": 63488 00:24:19.768 }, 00:24:19.768 { 00:24:19.768 "name": "BaseBdev2", 00:24:19.768 "uuid": "09070ead-b0a8-508e-821e-cce4cd16d8dc", 00:24:19.768 "is_configured": true, 00:24:19.768 "data_offset": 2048, 00:24:19.768 "data_size": 63488 00:24:19.768 }, 00:24:19.768 { 00:24:19.768 "name": "BaseBdev3", 00:24:19.768 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375", 00:24:19.768 "is_configured": true, 00:24:19.768 "data_offset": 2048, 00:24:19.768 "data_size": 63488 00:24:19.768 }, 00:24:19.768 { 00:24:19.768 "name": "BaseBdev4", 00:24:19.768 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846", 00:24:19.768 "is_configured": true, 00:24:19.768 "data_offset": 2048, 00:24:19.768 "data_size": 63488 00:24:19.768 } 00:24:19.768 ] 00:24:19.768 }' 00:24:19.768 12:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:19.768 12:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:24:19.768 12:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:19.768 12:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:19.768 12:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:19.768 12:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.768 12:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.768 [2024-12-05 12:56:02.351365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:20.026 [2024-12-05 12:56:02.358981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:24:20.026 12:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.026 12:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:20.026 [2024-12-05 12:56:02.360614] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:20.961 "name": "raid_bdev1", 00:24:20.961 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea", 00:24:20.961 "strip_size_kb": 0, 00:24:20.961 "state": "online", 00:24:20.961 "raid_level": "raid1", 00:24:20.961 "superblock": true, 00:24:20.961 "num_base_bdevs": 4, 00:24:20.961 "num_base_bdevs_discovered": 4, 00:24:20.961 "num_base_bdevs_operational": 4, 00:24:20.961 "process": { 00:24:20.961 "type": "rebuild", 00:24:20.961 "target": "spare", 00:24:20.961 "progress": { 00:24:20.961 "blocks": 20480, 00:24:20.961 "percent": 32 00:24:20.961 } 00:24:20.961 }, 00:24:20.961 "base_bdevs_list": [ 00:24:20.961 { 00:24:20.961 "name": "spare", 00:24:20.961 "uuid": "b2f2cc00-122e-55cc-be42-9a9077fabe17", 00:24:20.961 "is_configured": true, 00:24:20.961 "data_offset": 2048, 00:24:20.961 "data_size": 63488 00:24:20.961 }, 00:24:20.961 { 00:24:20.961 "name": "BaseBdev2", 00:24:20.961 "uuid": "09070ead-b0a8-508e-821e-cce4cd16d8dc", 00:24:20.961 "is_configured": true, 00:24:20.961 "data_offset": 2048, 00:24:20.961 "data_size": 63488 00:24:20.961 }, 00:24:20.961 { 00:24:20.961 "name": "BaseBdev3", 00:24:20.961 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375", 00:24:20.961 "is_configured": true, 00:24:20.961 "data_offset": 2048, 00:24:20.961 "data_size": 63488 00:24:20.961 }, 00:24:20.961 { 00:24:20.961 "name": "BaseBdev4", 00:24:20.961 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846", 00:24:20.961 "is_configured": true, 00:24:20.961 "data_offset": 2048, 00:24:20.961 "data_size": 63488 00:24:20.961 } 00:24:20.961 ] 00:24:20.961 }' 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:20.961 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:24:20.961 12:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.962 12:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.962 [2024-12-05 12:56:03.470864] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:21.221 [2024-12-05 12:56:03.666216] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:21.221 "name": "raid_bdev1", 00:24:21.221 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea", 00:24:21.221 "strip_size_kb": 0, 00:24:21.221 "state": "online", 00:24:21.221 "raid_level": "raid1", 00:24:21.221 "superblock": true, 00:24:21.221 "num_base_bdevs": 4, 00:24:21.221 "num_base_bdevs_discovered": 3, 00:24:21.221 "num_base_bdevs_operational": 3, 00:24:21.221 "process": { 00:24:21.221 "type": "rebuild", 00:24:21.221 "target": "spare", 00:24:21.221 "progress": { 00:24:21.221 "blocks": 24576, 00:24:21.221 "percent": 38 00:24:21.221 } 00:24:21.221 }, 00:24:21.221 "base_bdevs_list": [ 00:24:21.221 { 00:24:21.221 "name": "spare", 00:24:21.221 "uuid": "b2f2cc00-122e-55cc-be42-9a9077fabe17", 00:24:21.221 "is_configured": true, 00:24:21.221 "data_offset": 2048, 00:24:21.221 "data_size": 63488 00:24:21.221 }, 00:24:21.221 { 00:24:21.221 "name": null, 00:24:21.221 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:24:21.221 "is_configured": false, 00:24:21.221 "data_offset": 0, 00:24:21.221 "data_size": 63488 00:24:21.221 }, 00:24:21.221 { 00:24:21.221 "name": "BaseBdev3", 00:24:21.221 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375", 00:24:21.221 "is_configured": true, 00:24:21.221 "data_offset": 2048, 00:24:21.221 "data_size": 63488 00:24:21.221 }, 00:24:21.221 { 00:24:21.221 "name": "BaseBdev4", 00:24:21.221 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846", 00:24:21.221 "is_configured": true, 00:24:21.221 "data_offset": 2048, 00:24:21.221 "data_size": 63488 00:24:21.221 } 00:24:21.221 ] 00:24:21.221 }' 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=356 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.221 
12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.221 12:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.480 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:21.480 "name": "raid_bdev1", 00:24:21.480 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea", 00:24:21.480 "strip_size_kb": 0, 00:24:21.480 "state": "online", 00:24:21.480 "raid_level": "raid1", 00:24:21.480 "superblock": true, 00:24:21.480 "num_base_bdevs": 4, 00:24:21.480 "num_base_bdevs_discovered": 3, 00:24:21.480 "num_base_bdevs_operational": 3, 00:24:21.480 "process": { 00:24:21.480 "type": "rebuild", 00:24:21.480 "target": "spare", 00:24:21.480 "progress": { 00:24:21.480 "blocks": 26624, 00:24:21.480 "percent": 41 00:24:21.480 } 00:24:21.480 }, 00:24:21.480 "base_bdevs_list": [ 00:24:21.480 { 00:24:21.480 "name": "spare", 00:24:21.480 "uuid": "b2f2cc00-122e-55cc-be42-9a9077fabe17", 00:24:21.480 "is_configured": true, 00:24:21.480 "data_offset": 2048, 00:24:21.480 "data_size": 63488 00:24:21.480 }, 00:24:21.480 { 00:24:21.480 "name": null, 00:24:21.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.480 "is_configured": false, 00:24:21.480 "data_offset": 0, 00:24:21.480 "data_size": 63488 00:24:21.480 }, 00:24:21.480 { 00:24:21.480 "name": "BaseBdev3", 00:24:21.480 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375", 00:24:21.480 "is_configured": true, 00:24:21.480 "data_offset": 2048, 00:24:21.480 "data_size": 63488 00:24:21.480 }, 00:24:21.480 { 00:24:21.480 "name": "BaseBdev4", 00:24:21.480 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846", 00:24:21.480 "is_configured": true, 00:24:21.480 "data_offset": 2048, 00:24:21.480 "data_size": 63488 
00:24:21.480 } 00:24:21.480 ] 00:24:21.480 }' 00:24:21.480 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:21.480 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:21.480 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:21.480 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:21.480 12:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:22.413 12:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:22.413 12:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:22.413 12:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:22.413 12:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:22.413 12:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:22.413 12:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:22.413 12:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:22.413 12:56:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.413 12:56:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.413 12:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.413 12:56:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.413 12:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:22.413 "name": "raid_bdev1", 00:24:22.413 "uuid": 
"129b31e9-4198-4560-90a3-7d3364a213ea", 00:24:22.413 "strip_size_kb": 0, 00:24:22.413 "state": "online", 00:24:22.413 "raid_level": "raid1", 00:24:22.413 "superblock": true, 00:24:22.413 "num_base_bdevs": 4, 00:24:22.413 "num_base_bdevs_discovered": 3, 00:24:22.413 "num_base_bdevs_operational": 3, 00:24:22.413 "process": { 00:24:22.413 "type": "rebuild", 00:24:22.413 "target": "spare", 00:24:22.413 "progress": { 00:24:22.413 "blocks": 49152, 00:24:22.413 "percent": 77 00:24:22.413 } 00:24:22.413 }, 00:24:22.413 "base_bdevs_list": [ 00:24:22.413 { 00:24:22.413 "name": "spare", 00:24:22.413 "uuid": "b2f2cc00-122e-55cc-be42-9a9077fabe17", 00:24:22.413 "is_configured": true, 00:24:22.413 "data_offset": 2048, 00:24:22.413 "data_size": 63488 00:24:22.413 }, 00:24:22.413 { 00:24:22.413 "name": null, 00:24:22.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.413 "is_configured": false, 00:24:22.413 "data_offset": 0, 00:24:22.413 "data_size": 63488 00:24:22.413 }, 00:24:22.413 { 00:24:22.413 "name": "BaseBdev3", 00:24:22.413 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375", 00:24:22.413 "is_configured": true, 00:24:22.413 "data_offset": 2048, 00:24:22.413 "data_size": 63488 00:24:22.413 }, 00:24:22.413 { 00:24:22.413 "name": "BaseBdev4", 00:24:22.413 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846", 00:24:22.413 "is_configured": true, 00:24:22.413 "data_offset": 2048, 00:24:22.413 "data_size": 63488 00:24:22.413 } 00:24:22.413 ] 00:24:22.413 }' 00:24:22.413 12:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:22.413 12:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:22.413 12:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:22.413 12:56:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:22.413 12:56:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:24:22.994 [2024-12-05 12:56:05.574981] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:22.994 [2024-12-05 12:56:05.575048] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:22.994 [2024-12-05 12:56:05.575157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:23.557 12:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:23.557 12:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:23.557 12:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:23.557 12:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:23.557 12:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:23.557 12:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:23.557 12:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.557 12:56:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.557 12:56:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.557 12:56:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.557 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.557 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:23.557 "name": "raid_bdev1", 00:24:23.557 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea", 00:24:23.557 "strip_size_kb": 0, 00:24:23.557 "state": "online", 00:24:23.557 "raid_level": "raid1", 00:24:23.557 "superblock": true, 00:24:23.557 "num_base_bdevs": 
4, 00:24:23.557 "num_base_bdevs_discovered": 3, 00:24:23.557 "num_base_bdevs_operational": 3, 00:24:23.557 "base_bdevs_list": [ 00:24:23.557 { 00:24:23.557 "name": "spare", 00:24:23.557 "uuid": "b2f2cc00-122e-55cc-be42-9a9077fabe17", 00:24:23.557 "is_configured": true, 00:24:23.557 "data_offset": 2048, 00:24:23.557 "data_size": 63488 00:24:23.557 }, 00:24:23.557 { 00:24:23.557 "name": null, 00:24:23.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.557 "is_configured": false, 00:24:23.557 "data_offset": 0, 00:24:23.557 "data_size": 63488 00:24:23.557 }, 00:24:23.557 { 00:24:23.557 "name": "BaseBdev3", 00:24:23.557 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375", 00:24:23.557 "is_configured": true, 00:24:23.557 "data_offset": 2048, 00:24:23.557 "data_size": 63488 00:24:23.557 }, 00:24:23.557 { 00:24:23.557 "name": "BaseBdev4", 00:24:23.557 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846", 00:24:23.557 "is_configured": true, 00:24:23.557 "data_offset": 2048, 00:24:23.557 "data_size": 63488 00:24:23.557 } 00:24:23.557 ] 00:24:23.557 }' 00:24:23.557 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:23.557 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:23.557 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:23.557 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:23.557 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:24:23.557 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:23.557 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:23.557 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:23.557 12:56:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:23.557 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:23.557 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.557 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.557 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.557 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.557 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.557 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:23.557 "name": "raid_bdev1", 00:24:23.557 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea", 00:24:23.557 "strip_size_kb": 0, 00:24:23.557 "state": "online", 00:24:23.557 "raid_level": "raid1", 00:24:23.557 "superblock": true, 00:24:23.557 "num_base_bdevs": 4, 00:24:23.557 "num_base_bdevs_discovered": 3, 00:24:23.557 "num_base_bdevs_operational": 3, 00:24:23.557 "base_bdevs_list": [ 00:24:23.557 { 00:24:23.557 "name": "spare", 00:24:23.557 "uuid": "b2f2cc00-122e-55cc-be42-9a9077fabe17", 00:24:23.557 "is_configured": true, 00:24:23.557 "data_offset": 2048, 00:24:23.557 "data_size": 63488 00:24:23.557 }, 00:24:23.557 { 00:24:23.557 "name": null, 00:24:23.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.557 "is_configured": false, 00:24:23.557 "data_offset": 0, 00:24:23.557 "data_size": 63488 00:24:23.557 }, 00:24:23.557 { 00:24:23.557 "name": "BaseBdev3", 00:24:23.557 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375", 00:24:23.557 "is_configured": true, 00:24:23.557 "data_offset": 2048, 00:24:23.557 "data_size": 63488 00:24:23.557 }, 00:24:23.557 { 00:24:23.557 "name": "BaseBdev4", 00:24:23.557 "uuid": 
"c82324c3-bb39-5598-bee4-fd1c8de7f846", 00:24:23.557 "is_configured": true, 00:24:23.557 "data_offset": 2048, 00:24:23.557 "data_size": 63488 00:24:23.557 } 00:24:23.557 ] 00:24:23.557 }' 00:24:23.557 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:23.814 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:23.814 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:23.814 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:23.814 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:23.814 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:23.814 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:23.814 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:23.814 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:23.814 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:23.814 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:23.814 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:23.814 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:23.814 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:23.814 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.814 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.814 12:56:06 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.814 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.814 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.814 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:23.814 "name": "raid_bdev1", 00:24:23.814 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea", 00:24:23.814 "strip_size_kb": 0, 00:24:23.814 "state": "online", 00:24:23.814 "raid_level": "raid1", 00:24:23.814 "superblock": true, 00:24:23.814 "num_base_bdevs": 4, 00:24:23.814 "num_base_bdevs_discovered": 3, 00:24:23.814 "num_base_bdevs_operational": 3, 00:24:23.814 "base_bdevs_list": [ 00:24:23.814 { 00:24:23.814 "name": "spare", 00:24:23.814 "uuid": "b2f2cc00-122e-55cc-be42-9a9077fabe17", 00:24:23.814 "is_configured": true, 00:24:23.814 "data_offset": 2048, 00:24:23.814 "data_size": 63488 00:24:23.814 }, 00:24:23.814 { 00:24:23.814 "name": null, 00:24:23.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.814 "is_configured": false, 00:24:23.814 "data_offset": 0, 00:24:23.814 "data_size": 63488 00:24:23.814 }, 00:24:23.814 { 00:24:23.814 "name": "BaseBdev3", 00:24:23.814 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375", 00:24:23.814 "is_configured": true, 00:24:23.814 "data_offset": 2048, 00:24:23.814 "data_size": 63488 00:24:23.814 }, 00:24:23.814 { 00:24:23.814 "name": "BaseBdev4", 00:24:23.814 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846", 00:24:23.814 "is_configured": true, 00:24:23.814 "data_offset": 2048, 00:24:23.814 "data_size": 63488 00:24:23.814 } 00:24:23.814 ] 00:24:23.814 }' 00:24:23.814 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:23.814 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.071 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:24:24.071 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.071 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.071 [2024-12-05 12:56:06.475367] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:24.071 [2024-12-05 12:56:06.475506] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:24.071 [2024-12-05 12:56:06.475581] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:24.071 [2024-12-05 12:56:06.475648] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:24.071 [2024-12-05 12:56:06.475657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:24.071 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.071 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:24:24.071 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.071 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.071 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.071 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.071 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:24.071 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:24.071 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:24.071 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:24.071 
12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:24.071 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:24.071 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:24.071 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:24.071 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:24.071 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:24.071 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:24.071 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:24.071 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:24.329 /dev/nbd0 00:24:24.329 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:24.329 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:24.329 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:24.329 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:24:24.329 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:24.329 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:24.329 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:24.329 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:24:24.329 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:24.329 12:56:06 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:24.329 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:24.329 1+0 records in 00:24:24.329 1+0 records out 00:24:24.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325542 s, 12.6 MB/s 00:24:24.329 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:24.329 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:24:24.329 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:24.329 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:24.329 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:24:24.329 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:24.329 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:24.329 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:24.621 /dev/nbd1 00:24:24.621 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:24.621 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:24.621 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:24.621 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:24:24.621 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:24.621 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- 
# (( i <= 20 )) 00:24:24.621 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:24.621 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:24:24.621 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:24.621 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:24.621 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:24.621 1+0 records in 00:24:24.621 1+0 records out 00:24:24.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273707 s, 15.0 MB/s 00:24:24.621 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:24.621 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:24:24.621 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:24.621 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:24.621 12:56:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:24:24.621 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:24.621 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:24.621 12:56:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:24.621 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:24.621 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:24.621 12:56:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:24.621 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:24.621 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:24.621 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:24.621 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:24.879 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:24.879 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:24.879 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:24.879 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:24.879 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:24.879 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:24.879 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:24.879 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:24.879 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:24.879 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i = 1 )) 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.136 [2024-12-05 12:56:07.523013] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:25.136 [2024-12-05 12:56:07.523602] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.136 [2024-12-05 12:56:07.523634] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:24:25.136 [2024-12-05 12:56:07.523643] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.136 [2024-12-05 12:56:07.525559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.136 [2024-12-05 12:56:07.525589] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:24:25.136 [2024-12-05 12:56:07.525677] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:25.136 [2024-12-05 12:56:07.525715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:25.136 [2024-12-05 12:56:07.525826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:25.136 [2024-12-05 12:56:07.525901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:25.136 spare 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.136 [2024-12-05 12:56:07.625982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:25.136 [2024-12-05 12:56:07.626017] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:25.136 [2024-12-05 12:56:07.626284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:24:25.136 [2024-12-05 12:56:07.626440] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:25.136 [2024-12-05 12:56:07.626448] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:25.136 [2024-12-05 12:56:07.626617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:25.136 12:56:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:25.136 "name": "raid_bdev1", 00:24:25.136 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea", 00:24:25.136 "strip_size_kb": 0, 00:24:25.136 "state": "online", 00:24:25.136 "raid_level": "raid1", 00:24:25.136 "superblock": true, 00:24:25.136 "num_base_bdevs": 4, 00:24:25.136 "num_base_bdevs_discovered": 3, 00:24:25.136 "num_base_bdevs_operational": 3, 00:24:25.136 "base_bdevs_list": [ 00:24:25.136 { 
00:24:25.136 "name": "spare", 00:24:25.136 "uuid": "b2f2cc00-122e-55cc-be42-9a9077fabe17", 00:24:25.136 "is_configured": true, 00:24:25.136 "data_offset": 2048, 00:24:25.136 "data_size": 63488 00:24:25.136 }, 00:24:25.136 { 00:24:25.136 "name": null, 00:24:25.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.136 "is_configured": false, 00:24:25.136 "data_offset": 2048, 00:24:25.136 "data_size": 63488 00:24:25.136 }, 00:24:25.136 { 00:24:25.136 "name": "BaseBdev3", 00:24:25.136 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375", 00:24:25.136 "is_configured": true, 00:24:25.136 "data_offset": 2048, 00:24:25.136 "data_size": 63488 00:24:25.136 }, 00:24:25.136 { 00:24:25.136 "name": "BaseBdev4", 00:24:25.136 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846", 00:24:25.136 "is_configured": true, 00:24:25.136 "data_offset": 2048, 00:24:25.136 "data_size": 63488 00:24:25.136 } 00:24:25.136 ] 00:24:25.136 }' 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:25.136 12:56:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.393 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:25.393 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:25.393 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:25.393 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:25.393 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:25.393 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.393 12:56:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.393 12:56:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.393 
12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.650 12:56:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.650 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:25.650 "name": "raid_bdev1", 00:24:25.650 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea", 00:24:25.650 "strip_size_kb": 0, 00:24:25.650 "state": "online", 00:24:25.650 "raid_level": "raid1", 00:24:25.650 "superblock": true, 00:24:25.650 "num_base_bdevs": 4, 00:24:25.650 "num_base_bdevs_discovered": 3, 00:24:25.650 "num_base_bdevs_operational": 3, 00:24:25.650 "base_bdevs_list": [ 00:24:25.650 { 00:24:25.650 "name": "spare", 00:24:25.650 "uuid": "b2f2cc00-122e-55cc-be42-9a9077fabe17", 00:24:25.650 "is_configured": true, 00:24:25.650 "data_offset": 2048, 00:24:25.650 "data_size": 63488 00:24:25.650 }, 00:24:25.650 { 00:24:25.650 "name": null, 00:24:25.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.650 "is_configured": false, 00:24:25.650 "data_offset": 2048, 00:24:25.650 "data_size": 63488 00:24:25.650 }, 00:24:25.650 { 00:24:25.650 "name": "BaseBdev3", 00:24:25.650 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375", 00:24:25.650 "is_configured": true, 00:24:25.650 "data_offset": 2048, 00:24:25.650 "data_size": 63488 00:24:25.650 }, 00:24:25.650 { 00:24:25.650 "name": "BaseBdev4", 00:24:25.650 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846", 00:24:25.650 "is_configured": true, 00:24:25.650 "data_offset": 2048, 00:24:25.650 "data_size": 63488 00:24:25.650 } 00:24:25.650 ] 00:24:25.650 }' 00:24:25.650 12:56:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:25.650 12:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:25.650 12:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:25.650 12:56:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:25.650 12:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.650 12:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:25.650 12:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.650 12:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.650 12:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.650 12:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:25.650 12:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:25.650 12:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.650 12:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.650 [2024-12-05 12:56:08.107166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:25.650 12:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.650 12:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:25.650 12:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:25.650 12:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:25.650 12:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:25.650 12:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:25.650 12:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:25.650 12:56:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:25.650 12:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:25.651 12:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:25.651 12:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:25.651 12:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.651 12:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.651 12:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.651 12:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.651 12:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.651 12:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:25.651 "name": "raid_bdev1", 00:24:25.651 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea", 00:24:25.651 "strip_size_kb": 0, 00:24:25.651 "state": "online", 00:24:25.651 "raid_level": "raid1", 00:24:25.651 "superblock": true, 00:24:25.651 "num_base_bdevs": 4, 00:24:25.651 "num_base_bdevs_discovered": 2, 00:24:25.651 "num_base_bdevs_operational": 2, 00:24:25.651 "base_bdevs_list": [ 00:24:25.651 { 00:24:25.651 "name": null, 00:24:25.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.651 "is_configured": false, 00:24:25.651 "data_offset": 0, 00:24:25.651 "data_size": 63488 00:24:25.651 }, 00:24:25.651 { 00:24:25.651 "name": null, 00:24:25.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.651 "is_configured": false, 00:24:25.651 "data_offset": 2048, 00:24:25.651 "data_size": 63488 00:24:25.651 }, 00:24:25.651 { 00:24:25.651 "name": "BaseBdev3", 00:24:25.651 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375", 00:24:25.651 
"is_configured": true, 00:24:25.651 "data_offset": 2048, 00:24:25.651 "data_size": 63488 00:24:25.651 }, 00:24:25.651 { 00:24:25.651 "name": "BaseBdev4", 00:24:25.651 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846", 00:24:25.651 "is_configured": true, 00:24:25.651 "data_offset": 2048, 00:24:25.651 "data_size": 63488 00:24:25.651 } 00:24:25.651 ] 00:24:25.651 }' 00:24:25.651 12:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:25.651 12:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.908 12:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:25.908 12:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.908 12:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.908 [2024-12-05 12:56:08.435240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:25.908 [2024-12-05 12:56:08.435390] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:24:25.908 [2024-12-05 12:56:08.435402] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:25.908 [2024-12-05 12:56:08.435435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:25.908 [2024-12-05 12:56:08.442973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:24:25.908 12:56:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.908 12:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:25.908 [2024-12-05 12:56:08.444555] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:27.277 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:27.277 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:27.278 "name": "raid_bdev1", 00:24:27.278 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea", 00:24:27.278 "strip_size_kb": 0, 00:24:27.278 "state": "online", 00:24:27.278 "raid_level": "raid1", 
00:24:27.278 "superblock": true,
00:24:27.278 "num_base_bdevs": 4,
00:24:27.278 "num_base_bdevs_discovered": 3,
00:24:27.278 "num_base_bdevs_operational": 3,
00:24:27.278 "process": {
00:24:27.278 "type": "rebuild",
00:24:27.278 "target": "spare",
00:24:27.278 "progress": {
00:24:27.278 "blocks": 20480,
00:24:27.278 "percent": 32
00:24:27.278 }
00:24:27.278 },
00:24:27.278 "base_bdevs_list": [
00:24:27.278 {
00:24:27.278 "name": "spare",
00:24:27.278 "uuid": "b2f2cc00-122e-55cc-be42-9a9077fabe17",
00:24:27.278 "is_configured": true,
00:24:27.278 "data_offset": 2048,
00:24:27.278 "data_size": 63488
00:24:27.278 },
00:24:27.278 {
00:24:27.278 "name": null,
00:24:27.278 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:27.278 "is_configured": false,
00:24:27.278 "data_offset": 2048,
00:24:27.278 "data_size": 63488
00:24:27.278 },
00:24:27.278 {
00:24:27.278 "name": "BaseBdev3",
00:24:27.278 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375",
00:24:27.278 "is_configured": true,
00:24:27.278 "data_offset": 2048,
00:24:27.278 "data_size": 63488
00:24:27.278 },
00:24:27.278 {
00:24:27.278 "name": "BaseBdev4",
00:24:27.278 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846",
00:24:27.278 "is_configured": true,
00:24:27.278 "data_offset": 2048,
00:24:27.278 "data_size": 63488
00:24:27.278 }
00:24:27.278 ]
00:24:27.278 }'
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:24:27.278 [2024-12-05 12:56:09.554787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:27.278 [2024-12-05 12:56:09.649831] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:24:27.278 [2024-12-05 12:56:09.650041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:27.278 [2024-12-05 12:56:09.650060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:27.278 [2024-12-05 12:56:09.650067] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:27.278 "name": "raid_bdev1",
00:24:27.278 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea",
00:24:27.278 "strip_size_kb": 0,
00:24:27.278 "state": "online",
00:24:27.278 "raid_level": "raid1",
00:24:27.278 "superblock": true,
00:24:27.278 "num_base_bdevs": 4,
00:24:27.278 "num_base_bdevs_discovered": 2,
00:24:27.278 "num_base_bdevs_operational": 2,
00:24:27.278 "base_bdevs_list": [
00:24:27.278 {
00:24:27.278 "name": null,
00:24:27.278 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:27.278 "is_configured": false,
00:24:27.278 "data_offset": 0,
00:24:27.278 "data_size": 63488
00:24:27.278 },
00:24:27.278 {
00:24:27.278 "name": null,
00:24:27.278 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:27.278 "is_configured": false,
00:24:27.278 "data_offset": 2048,
00:24:27.278 "data_size": 63488
00:24:27.278 },
00:24:27.278 {
00:24:27.278 "name": "BaseBdev3",
00:24:27.278 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375",
00:24:27.278 "is_configured": true,
00:24:27.278 "data_offset": 2048,
00:24:27.278 "data_size": 63488
00:24:27.278 },
00:24:27.278 {
00:24:27.278 "name": "BaseBdev4",
00:24:27.278 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846",
00:24:27.278 "is_configured": true,
00:24:27.278 "data_offset": 2048,
00:24:27.278 "data_size": 63488
00:24:27.278 }
00:24:27.278 ]
00:24:27.278 }'
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:27.278 12:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:24:27.592 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:24:27.592 12:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:27.592 12:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:24:27.592 [2024-12-05 12:56:09.982211] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:24:27.592 [2024-12-05 12:56:09.982265] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:27.592 [2024-12-05 12:56:09.982290] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380
00:24:27.592 [2024-12-05 12:56:09.982298] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:27.592 [2024-12-05 12:56:09.982688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:27.592 [2024-12-05 12:56:09.982701] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:24:27.592 [2024-12-05 12:56:09.982774] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:24:27.592 [2024-12-05 12:56:09.982787] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:24:27.592 [2024-12-05 12:56:09.982800] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:24:27.593 [2024-12-05 12:56:09.982815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:27.593 [2024-12-05 12:56:09.990447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20
00:24:27.593 spare
00:24:27.593 12:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:27.593 12:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
00:24:27.593 [2024-12-05 12:56:09.992018] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:24:28.526 12:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:28.526 12:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:24:28.526 12:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:24:28.526 12:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:24:28.526 12:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:24:28.526 12:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:28.526 12:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:28.526 12:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:28.527 12:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:24:28.527 12:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:28.527 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:24:28.527 "name": "raid_bdev1",
00:24:28.527 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea",
00:24:28.527 "strip_size_kb": 0,
00:24:28.527 "state": "online",
00:24:28.527 "raid_level": "raid1",
00:24:28.527 "superblock": true,
00:24:28.527 "num_base_bdevs": 4,
00:24:28.527 "num_base_bdevs_discovered": 3,
00:24:28.527 "num_base_bdevs_operational": 3,
00:24:28.527 "process": {
00:24:28.527 "type": "rebuild",
00:24:28.527 "target": "spare",
00:24:28.527 "progress": {
00:24:28.527 "blocks": 20480,
00:24:28.527 "percent": 32
00:24:28.527 }
00:24:28.527 },
00:24:28.527 "base_bdevs_list": [
00:24:28.527 {
00:24:28.527 "name": "spare",
00:24:28.527 "uuid": "b2f2cc00-122e-55cc-be42-9a9077fabe17",
00:24:28.527 "is_configured": true,
00:24:28.527 "data_offset": 2048,
00:24:28.527 "data_size": 63488
00:24:28.527 },
00:24:28.527 {
00:24:28.527 "name": null,
00:24:28.527 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:28.527 "is_configured": false,
00:24:28.527 "data_offset": 2048,
00:24:28.527 "data_size": 63488
00:24:28.527 },
00:24:28.527 {
00:24:28.527 "name": "BaseBdev3",
00:24:28.527 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375",
00:24:28.527 "is_configured": true,
00:24:28.527 "data_offset": 2048,
00:24:28.527 "data_size": 63488
00:24:28.527 },
00:24:28.527 {
00:24:28.527 "name": "BaseBdev4",
00:24:28.527 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846",
00:24:28.527 "is_configured": true,
00:24:28.527 "data_offset": 2048,
00:24:28.527 "data_size": 63488
00:24:28.527 }
00:24:28.527 ]
00:24:28.527 }'
00:24:28.527 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:24:28.527 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:28.527 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:24:28.527 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:24:28.527 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:24:28.527 12:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:28.527 12:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:24:28.527 [2024-12-05 12:56:11.098346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:28.784 [2024-12-05 12:56:11.197635] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:24:28.784 [2024-12-05 12:56:11.197840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:28.784 [2024-12-05 12:56:11.197899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:28.784 [2024-12-05 12:56:11.197923] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:24:28.784 12:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:28.784 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:24:28.784 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:24:28.784 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:24:28.784 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:24:28.784 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:24:28.784 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:24:28.784 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:28.784 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:28.784 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:28.784 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:28.784 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:28.784 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:28.784 12:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:28.784 12:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:24:28.784 12:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:28.784 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:28.784 "name": "raid_bdev1",
00:24:28.784 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea",
00:24:28.784 "strip_size_kb": 0,
00:24:28.784 "state": "online",
00:24:28.784 "raid_level": "raid1",
00:24:28.784 "superblock": true,
00:24:28.784 "num_base_bdevs": 4,
00:24:28.784 "num_base_bdevs_discovered": 2,
00:24:28.784 "num_base_bdevs_operational": 2,
00:24:28.784 "base_bdevs_list": [
00:24:28.784 {
00:24:28.784 "name": null,
00:24:28.784 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:28.785 "is_configured": false,
00:24:28.785 "data_offset": 0,
00:24:28.785 "data_size": 63488
00:24:28.785 },
00:24:28.785 {
00:24:28.785 "name": null,
00:24:28.785 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:28.785 "is_configured": false,
00:24:28.785 "data_offset": 2048,
00:24:28.785 "data_size": 63488
00:24:28.785 },
00:24:28.785 {
00:24:28.785 "name": "BaseBdev3",
00:24:28.785 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375",
00:24:28.785 "is_configured": true,
00:24:28.785 "data_offset": 2048,
00:24:28.785 "data_size": 63488
00:24:28.785 },
00:24:28.785 {
00:24:28.785 "name": "BaseBdev4",
00:24:28.785 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846",
00:24:28.785 "is_configured": true,
00:24:28.785 "data_offset": 2048,
00:24:28.785 "data_size": 63488
00:24:28.785 }
00:24:28.785 ]
00:24:28.785 }'
00:24:28.785 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:28.785 12:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:24:29.042 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:24:29.043 "name": "raid_bdev1",
00:24:29.043 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea",
00:24:29.043 "strip_size_kb": 0,
00:24:29.043 "state": "online",
00:24:29.043 "raid_level": "raid1",
00:24:29.043 "superblock": true,
00:24:29.043 "num_base_bdevs": 4,
00:24:29.043 "num_base_bdevs_discovered": 2,
00:24:29.043 "num_base_bdevs_operational": 2,
00:24:29.043 "base_bdevs_list": [
00:24:29.043 {
00:24:29.043 "name": null,
00:24:29.043 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:29.043 "is_configured": false,
00:24:29.043 "data_offset": 0,
00:24:29.043 "data_size": 63488
00:24:29.043 },
00:24:29.043 {
00:24:29.043 "name": null,
00:24:29.043 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:29.043 "is_configured": false,
00:24:29.043 "data_offset": 2048,
00:24:29.043 "data_size": 63488
00:24:29.043 },
00:24:29.043 {
00:24:29.043 "name": "BaseBdev3",
00:24:29.043 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375",
00:24:29.043 "is_configured": true,
00:24:29.043 "data_offset": 2048,
00:24:29.043 "data_size": 63488
00:24:29.043 },
00:24:29.043 {
00:24:29.043 "name": "BaseBdev4",
00:24:29.043 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846",
00:24:29.043 "is_configured": true,
00:24:29.043 "data_offset": 2048,
00:24:29.043 "data_size": 63488
00:24:29.043 }
00:24:29.043 ]
00:24:29.043 }'
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:24:29.043 [2024-12-05 12:56:11.602023] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:24:29.043 [2024-12-05 12:56:11.602074] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:29.043 [2024-12-05 12:56:11.602089] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980
00:24:29.043 [2024-12-05 12:56:11.602098] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:29.043 [2024-12-05 12:56:11.602447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:29.043 [2024-12-05 12:56:11.602461] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:24:29.043 [2024-12-05 12:56:11.602531] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:24:29.043 [2024-12-05 12:56:11.602544] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6)
00:24:29.043 [2024-12-05 12:56:11.602550] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:24:29.043 [2024-12-05 12:56:11.602570] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:24:29.043 BaseBdev1
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:29.043 12:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:30.417 "name": "raid_bdev1",
00:24:30.417 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea",
00:24:30.417 "strip_size_kb": 0,
00:24:30.417 "state": "online",
00:24:30.417 "raid_level": "raid1",
00:24:30.417 "superblock": true,
00:24:30.417 "num_base_bdevs": 4,
00:24:30.417 "num_base_bdevs_discovered": 2,
00:24:30.417 "num_base_bdevs_operational": 2,
00:24:30.417 "base_bdevs_list": [
00:24:30.417 {
00:24:30.417 "name": null,
00:24:30.417 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:30.417 "is_configured": false,
00:24:30.417 "data_offset": 0,
00:24:30.417 "data_size": 63488
00:24:30.417 },
00:24:30.417 {
00:24:30.417 "name": null,
00:24:30.417 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:30.417 "is_configured": false,
00:24:30.417 "data_offset": 2048,
00:24:30.417 "data_size": 63488
00:24:30.417 },
00:24:30.417 {
00:24:30.417 "name": "BaseBdev3",
00:24:30.417 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375",
00:24:30.417 "is_configured": true,
00:24:30.417 "data_offset": 2048,
00:24:30.417 "data_size": 63488
00:24:30.417 },
00:24:30.417 {
00:24:30.417 "name": "BaseBdev4",
00:24:30.417 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846",
00:24:30.417 "is_configured": true,
00:24:30.417 "data_offset": 2048,
00:24:30.417 "data_size": 63488
00:24:30.417 }
00:24:30.417 ]
00:24:30.417 }'
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:24:30.417 "name": "raid_bdev1",
00:24:30.417 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea",
00:24:30.417 "strip_size_kb": 0,
00:24:30.417 "state": "online",
00:24:30.417 "raid_level": "raid1",
00:24:30.417 "superblock": true,
00:24:30.417 "num_base_bdevs": 4,
00:24:30.417 "num_base_bdevs_discovered": 2,
00:24:30.417 "num_base_bdevs_operational": 2,
00:24:30.417 "base_bdevs_list": [
00:24:30.417 {
00:24:30.417 "name": null,
00:24:30.417 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:30.417 "is_configured": false,
00:24:30.417 "data_offset": 0,
00:24:30.417 "data_size": 63488
00:24:30.417 },
00:24:30.417 {
00:24:30.417 "name": null,
00:24:30.417 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:30.417 "is_configured": false,
00:24:30.417 "data_offset": 2048,
00:24:30.417 "data_size": 63488
00:24:30.417 },
00:24:30.417 {
00:24:30.417 "name": "BaseBdev3",
00:24:30.417 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375",
00:24:30.417 "is_configured": true,
00:24:30.417 "data_offset": 2048,
00:24:30.417 "data_size": 63488
00:24:30.417 },
00:24:30.417 {
00:24:30.417 "name": "BaseBdev4",
00:24:30.417 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846",
00:24:30.417 "is_configured": true,
00:24:30.417 "data_offset": 2048,
00:24:30.417 "data_size": 63488
00:24:30.417 }
00:24:30.417 ]
00:24:30.417 }'
00:24:30.417 12:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:24:30.676 12:56:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:24:30.676 12:56:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:24:30.676 12:56:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:24:30.676 12:56:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:24:30.676 12:56:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0
00:24:30.676 12:56:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:24:30.676 12:56:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:24:30.676 12:56:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:30.676 12:56:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:24:30.676 12:56:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:30.676 12:56:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:24:30.676 12:56:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:30.676 12:56:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:24:30.676 [2024-12-05 12:56:13.050296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:24:30.676 [2024-12-05 12:56:13.050442] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6)
00:24:30.676 [2024-12-05 12:56:13.050453] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:24:30.676 request:
00:24:30.676 {
00:24:30.676 "base_bdev": "BaseBdev1",
00:24:30.676 "raid_bdev": "raid_bdev1",
00:24:30.676 "method": "bdev_raid_add_base_bdev",
00:24:30.676 "req_id": 1
00:24:30.676 }
00:24:30.676 Got JSON-RPC error response
00:24:30.676 response:
00:24:30.676 {
00:24:30.676 "code": -22,
00:24:30.676 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:24:30.676 }
00:24:30.676 12:56:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:24:30.676 12:56:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1
00:24:30.676 12:56:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:30.676 12:56:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:30.676 12:56:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:30.676 12:56:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1
00:24:31.609 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:24:31.609 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:24:31.609 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:24:31.609 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:24:31.609 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:24:31.609 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:24:31.609 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:31.609 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:31.609 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:31.609 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:31.609 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:31.609 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:31.609 12:56:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.609 12:56:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:24:31.609 12:56:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.609 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:31.609 "name": "raid_bdev1",
00:24:31.609 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea",
00:24:31.609 "strip_size_kb": 0,
00:24:31.609 "state": "online",
00:24:31.609 "raid_level": "raid1",
00:24:31.609 "superblock": true,
00:24:31.609 "num_base_bdevs": 4,
00:24:31.609 "num_base_bdevs_discovered": 2,
00:24:31.609 "num_base_bdevs_operational": 2,
00:24:31.609 "base_bdevs_list": [
00:24:31.609 {
00:24:31.609 "name": null,
00:24:31.609 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:31.609 "is_configured": false,
00:24:31.609 "data_offset": 0,
00:24:31.609 "data_size": 63488
00:24:31.609 },
00:24:31.609 {
00:24:31.609 "name": null,
00:24:31.609 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:31.609 "is_configured": false,
00:24:31.609 "data_offset": 2048,
00:24:31.609 "data_size": 63488
00:24:31.609 },
00:24:31.609 {
00:24:31.609 "name": "BaseBdev3",
00:24:31.609 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375",
00:24:31.609 "is_configured": true,
00:24:31.609 "data_offset": 2048,
00:24:31.609 "data_size": 63488
00:24:31.609 },
00:24:31.609 {
00:24:31.609 "name": "BaseBdev4",
00:24:31.609 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846",
00:24:31.609 "is_configured": true,
00:24:31.609 "data_offset": 2048,
00:24:31.609 "data_size": 63488
00:24:31.609 }
00:24:31.609 ]
00:24:31.609 }'
00:24:31.609 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:31.609 12:56:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:24:31.868 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:31.868 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:24:31.868 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:24:31.868 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:24:31.868 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:24:31.868 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:31.868 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:31.868 12:56:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:31.868 12:56:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:24:31.868 12:56:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:31.868 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:24:31.868 "name": "raid_bdev1",
00:24:31.868 "uuid": "129b31e9-4198-4560-90a3-7d3364a213ea",
00:24:31.868 "strip_size_kb": 0,
00:24:31.868 "state": "online",
00:24:31.868 "raid_level": "raid1",
00:24:31.868 "superblock": true,
00:24:31.868 "num_base_bdevs": 4,
00:24:31.868 "num_base_bdevs_discovered": 2,
00:24:31.868 "num_base_bdevs_operational": 2,
00:24:31.868 "base_bdevs_list": [
00:24:31.868 {
00:24:31.868 "name": null,
00:24:31.868 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:31.868 "is_configured": false,
00:24:31.868 "data_offset": 0,
00:24:31.868 "data_size": 63488
00:24:31.868 },
00:24:31.868 {
00:24:31.868 "name": null,
00:24:31.868 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:31.868 "is_configured": false,
00:24:31.868 "data_offset": 2048,
00:24:31.868 "data_size": 63488
00:24:31.868 },
00:24:31.868 {
00:24:31.868 "name": "BaseBdev3",
00:24:31.868 "uuid": "0152feb4-cb0c-5bfc-a805-2d1b38846375",
00:24:31.868 "is_configured": true,
00:24:31.868 "data_offset": 2048,
00:24:31.868 "data_size": 63488
00:24:31.868 },
00:24:31.868 { 00:24:31.868 "name": "BaseBdev4", 00:24:31.868 "uuid": "c82324c3-bb39-5598-bee4-fd1c8de7f846", 00:24:31.868 "is_configured": true, 00:24:31.868 "data_offset": 2048, 00:24:31.868 "data_size": 63488 00:24:31.868 } 00:24:31.868 ] 00:24:31.868 }' 00:24:31.868 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:31.868 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:31.868 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:32.126 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:32.126 12:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75637 00:24:32.126 12:56:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75637 ']' 00:24:32.126 12:56:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75637 00:24:32.126 12:56:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:24:32.126 12:56:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:32.126 12:56:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75637 00:24:32.126 killing process with pid 75637 00:24:32.126 Received shutdown signal, test time was about 60.000000 seconds 00:24:32.126 00:24:32.126 Latency(us) 00:24:32.126 [2024-12-05T12:56:14.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.126 [2024-12-05T12:56:14.713Z] =================================================================================================================== 00:24:32.126 [2024-12-05T12:56:14.713Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:32.126 12:56:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:24:32.126 12:56:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:32.126 12:56:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75637' 00:24:32.126 12:56:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75637 00:24:32.126 [2024-12-05 12:56:14.514996] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:32.126 12:56:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75637 00:24:32.126 [2024-12-05 12:56:14.515088] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:32.126 [2024-12-05 12:56:14.515147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:32.126 [2024-12-05 12:56:14.515156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:32.386 [2024-12-05 12:56:14.756058] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:24:32.951 00:24:32.951 real 0m22.233s 00:24:32.951 user 0m26.120s 00:24:32.951 sys 0m2.886s 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.951 ************************************ 00:24:32.951 END TEST raid_rebuild_test_sb 00:24:32.951 ************************************ 00:24:32.951 12:56:15 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:24:32.951 12:56:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:24:32.951 12:56:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:32.951 12:56:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
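The `killprocess` sequence above probes liveness with `kill -0`, reads the bare command name with `ps --no-headers -o comm=`, and only then sends the real signal and `wait`s. That probe can be reproduced standalone; the backgrounded `sleep` below is a throwaway stand-in for the bdevperf process (the log uses pid 75637), not part of the harness:

```shell
# Stand-in for the process under test; the log above targets bdevperf pid 75637
sleep 60 &
pid=$!

# kill -0 delivers no signal; it only tests that the pid exists and is signalable
if kill -0 "$pid" 2>/dev/null; then
  # ps -o comm= prints the bare command name with no header, as killprocess does
  name=$(ps --no-headers -o comm= "$pid")
  echo "killing process with pid $pid ($name)"
  kill "$pid"
fi
wait "$pid" 2>/dev/null || true
```

The `kill -0` form is the standard portable liveness check: it fails with ESRCH for a dead pid without disturbing a live one.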
00:24:32.951 ************************************ 00:24:32.951 START TEST raid_rebuild_test_io 00:24:32.951 ************************************ 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:24:32.951 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:32.952 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:32.952 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:32.952 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:32.952 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:32.952 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:32.952 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:32.952 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:32.952 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:32.952 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:32.952 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:32.952 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:24:32.952 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76361 00:24:32.952 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76361 00:24:32.952 12:56:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:32.952 12:56:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76361 ']' 00:24:32.952 12:56:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.952 12:56:15 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.952 12:56:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.952 12:56:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.952 12:56:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:32.952 [2024-12-05 12:56:15.446587] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:24:32.952 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:32.952 Zero copy mechanism will not be used. 00:24:32.952 [2024-12-05 12:56:15.446987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76361 ] 00:24:33.209 [2024-12-05 12:56:15.601706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.209 [2024-12-05 12:56:15.685026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.466 [2024-12-05 12:56:15.794875] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:33.466 [2024-12-05 12:56:15.794900] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:33.724 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:33.724 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:24:33.724 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:33.724 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:24:33.724 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.724 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:33.981 BaseBdev1_malloc 00:24:33.981 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.981 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:33.981 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.981 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:33.981 [2024-12-05 12:56:16.317220] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:33.981 [2024-12-05 12:56:16.317274] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:33.981 [2024-12-05 12:56:16.317291] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:33.981 [2024-12-05 12:56:16.317301] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:33.981 [2024-12-05 12:56:16.319028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:33.981 [2024-12-05 12:56:16.319061] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:33.981 BaseBdev1 00:24:33.981 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.981 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:33.981 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:33.981 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.981 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:24:33.981 BaseBdev2_malloc 00:24:33.981 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.981 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:33.981 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.981 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:33.981 [2024-12-05 12:56:16.348525] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:33.981 [2024-12-05 12:56:16.348668] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:33.981 [2024-12-05 12:56:16.348690] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:33.981 [2024-12-05 12:56:16.348699] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:33.981 [2024-12-05 12:56:16.350386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:33.981 [2024-12-05 12:56:16.350418] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:33.981 BaseBdev2 00:24:33.981 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.981 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:33.981 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:33.982 BaseBdev3_malloc 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:33.982 [2024-12-05 12:56:16.394847] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:33.982 [2024-12-05 12:56:16.394894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:33.982 [2024-12-05 12:56:16.394912] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:33.982 [2024-12-05 12:56:16.394922] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:33.982 [2024-12-05 12:56:16.396646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:33.982 [2024-12-05 12:56:16.396679] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:33.982 BaseBdev3 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:33.982 BaseBdev4_malloc 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:33.982 [2024-12-05 12:56:16.426507] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:33.982 [2024-12-05 12:56:16.426552] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:33.982 [2024-12-05 12:56:16.426565] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:33.982 [2024-12-05 12:56:16.426574] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:33.982 [2024-12-05 12:56:16.428249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:33.982 [2024-12-05 12:56:16.428375] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:33.982 BaseBdev4 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:33.982 spare_malloc 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:33.982 spare_delay 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:33.982 [2024-12-05 12:56:16.469967] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:33.982 [2024-12-05 12:56:16.470011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:33.982 [2024-12-05 12:56:16.470024] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:33.982 [2024-12-05 12:56:16.470032] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:33.982 [2024-12-05 12:56:16.471733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:33.982 [2024-12-05 12:56:16.471763] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:33.982 spare 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:33.982 [2024-12-05 12:56:16.478006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:33.982 [2024-12-05 12:56:16.479530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:33.982 [2024-12-05 12:56:16.479580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:33.982 [2024-12-05 12:56:16.479621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:24:33.982 [2024-12-05 12:56:16.479688] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:33.982 [2024-12-05 12:56:16.479698] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:33.982 [2024-12-05 12:56:16.479912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:33.982 [2024-12-05 12:56:16.480036] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:33.982 [2024-12-05 12:56:16.480044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:33.982 [2024-12-05 12:56:16.480174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:33.982 "name": "raid_bdev1", 00:24:33.982 "uuid": "c24e2622-cecf-47d2-877d-3339c2760208", 00:24:33.982 "strip_size_kb": 0, 00:24:33.982 "state": "online", 00:24:33.982 "raid_level": "raid1", 00:24:33.982 "superblock": false, 00:24:33.982 "num_base_bdevs": 4, 00:24:33.982 "num_base_bdevs_discovered": 4, 00:24:33.982 "num_base_bdevs_operational": 4, 00:24:33.982 "base_bdevs_list": [ 00:24:33.982 { 00:24:33.982 "name": "BaseBdev1", 00:24:33.982 "uuid": "5a1ccc69-a1f5-5ba5-9664-3c7cfe58dd42", 00:24:33.982 "is_configured": true, 00:24:33.982 "data_offset": 0, 00:24:33.982 "data_size": 65536 00:24:33.982 }, 00:24:33.982 { 00:24:33.982 "name": "BaseBdev2", 00:24:33.982 "uuid": "473e2c54-e735-5f17-bc16-6230a8f67224", 00:24:33.982 "is_configured": true, 00:24:33.982 "data_offset": 0, 00:24:33.982 "data_size": 65536 00:24:33.982 }, 00:24:33.982 { 00:24:33.982 "name": "BaseBdev3", 00:24:33.982 "uuid": "de8ea0c0-8684-5f11-864a-ee7a8306ecaf", 00:24:33.982 "is_configured": true, 00:24:33.982 "data_offset": 0, 00:24:33.982 "data_size": 65536 00:24:33.982 }, 00:24:33.982 { 00:24:33.982 "name": "BaseBdev4", 00:24:33.982 "uuid": "d58074e5-ef18-51ee-b344-d4f8fc5551a8", 00:24:33.982 "is_configured": true, 00:24:33.982 "data_offset": 0, 00:24:33.982 "data_size": 65536 00:24:33.982 } 00:24:33.982 ] 00:24:33.982 }' 00:24:33.982 
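Throughout the log, `verify_raid_bdev_state` and `verify_raid_bdev_process` fetch the full list with `rpc_cmd bdev_raid_get_bdevs all` and narrow it with `jq`. The filtering itself can be exercised offline; the stub JSON below stands in for the live RPC response (`rpc_cmd` belongs to the SPDK test harness and needs a running target):

```shell
# Stub standing in for: rpc_cmd bdev_raid_get_bdevs all
response='[{"name":"raid_bdev1","state":"online","raid_level":"raid1"},
           {"name":"other_bdev","state":"offline"}]'

# Same filter as bdev_raid.sh@113: keep only the bdev under test
tmp=$(echo "$response" | jq -r '.[] | select(.name == "raid_bdev1")')

# bdev_raid.sh@176/@177 default absent process fields to the string "none"
ptype=$(echo "$tmp" | jq -r '.process.type // "none"')
target=$(echo "$tmp" | jq -r '.process.target // "none"')
echo "$ptype $target"
```

The `// "none"` alternative operator is what lets the same check pass both when no rebuild is running (no `process` object at all) and when one is (`rebuild`/`spare`), since indexing a missing key in jq yields `null` rather than an error.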
12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:33.982 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:34.276 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:34.276 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:34.276 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.276 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:34.276 [2024-12-05 12:56:16.814357] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:34.277 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.277 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:24:34.277 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:34.277 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.277 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.277 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:34.277 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.534 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:24:34.534 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:24:34.534 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:34.534 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.534 12:56:16 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:24:34.534 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:34.534 [2024-12-05 12:56:16.878061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:34.534 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.534 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:34.535 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:34.535 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:34.535 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:34.535 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:34.535 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:34.535 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:34.535 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:34.535 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:34.535 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:34.535 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.535 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.535 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.535 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:34.535 12:56:16 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.535 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:34.535 "name": "raid_bdev1", 00:24:34.535 "uuid": "c24e2622-cecf-47d2-877d-3339c2760208", 00:24:34.535 "strip_size_kb": 0, 00:24:34.535 "state": "online", 00:24:34.535 "raid_level": "raid1", 00:24:34.535 "superblock": false, 00:24:34.535 "num_base_bdevs": 4, 00:24:34.535 "num_base_bdevs_discovered": 3, 00:24:34.535 "num_base_bdevs_operational": 3, 00:24:34.535 "base_bdevs_list": [ 00:24:34.535 { 00:24:34.535 "name": null, 00:24:34.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.535 "is_configured": false, 00:24:34.535 "data_offset": 0, 00:24:34.535 "data_size": 65536 00:24:34.535 }, 00:24:34.535 { 00:24:34.535 "name": "BaseBdev2", 00:24:34.535 "uuid": "473e2c54-e735-5f17-bc16-6230a8f67224", 00:24:34.535 "is_configured": true, 00:24:34.535 "data_offset": 0, 00:24:34.535 "data_size": 65536 00:24:34.535 }, 00:24:34.535 { 00:24:34.535 "name": "BaseBdev3", 00:24:34.535 "uuid": "de8ea0c0-8684-5f11-864a-ee7a8306ecaf", 00:24:34.535 "is_configured": true, 00:24:34.535 "data_offset": 0, 00:24:34.535 "data_size": 65536 00:24:34.535 }, 00:24:34.535 { 00:24:34.535 "name": "BaseBdev4", 00:24:34.535 "uuid": "d58074e5-ef18-51ee-b344-d4f8fc5551a8", 00:24:34.535 "is_configured": true, 00:24:34.535 "data_offset": 0, 00:24:34.535 "data_size": 65536 00:24:34.535 } 00:24:34.535 ] 00:24:34.535 }' 00:24:34.535 12:56:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:34.535 12:56:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:34.535 [2024-12-05 12:56:16.946517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:24:34.535 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:34.535 Zero copy mechanism will not be used. 00:24:34.535 Running I/O for 60 seconds... 
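After `bdev_raid_remove_base_bdev BaseBdev1`, the JSON above reports `num_base_bdevs_discovered: 3` while keeping a null placeholder entry in `base_bdevs_list`. That count can be derived from the list itself; the payload below is a trimmed copy of the log's output (field subset chosen for brevity):

```shell
# Trimmed from the raid_bdev_info dump above: one removed slot, three live members
info='{"num_base_bdevs": 4, "base_bdevs_list": [
  {"name": null,        "is_configured": false},
  {"name": "BaseBdev2", "is_configured": true},
  {"name": "BaseBdev3", "is_configured": true},
  {"name": "BaseBdev4", "is_configured": true}]}'

# Count configured members only; matches num_base_bdevs_discovered in the log
discovered=$(echo "$info" | jq '[.base_bdevs_list[] | select(.is_configured)] | length')
echo "discovered=$discovered"
```

Keeping the placeholder (all-zero uuid, `is_configured: false`) rather than shrinking the array is what preserves slot positions for a later `bdev_raid_add_base_bdev`/rebuild, as the rest of the test exercises.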
00:24:34.793 12:56:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:34.793 12:56:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.793 12:56:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:34.793 [2024-12-05 12:56:17.211905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:34.793 12:56:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.793 12:56:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:34.793 [2024-12-05 12:56:17.265441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:24:34.793 [2024-12-05 12:56:17.267175] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:35.052 [2024-12-05 12:56:17.380791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:35.052 [2024-12-05 12:56:17.381919] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:35.052 [2024-12-05 12:56:17.604815] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:35.052 [2024-12-05 12:56:17.605192] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:35.618 [2024-12-05 12:56:17.937173] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:35.618 [2024-12-05 12:56:17.938286] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:35.618 180.00 IOPS, 540.00 MiB/s [2024-12-05T12:56:18.205Z] [2024-12-05 12:56:18.172588] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:35.618 [2024-12-05 12:56:18.173236] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:35.877 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:35.877 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:35.877 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:35.877 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:35.877 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:35.877 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:35.877 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.877 12:56:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.877 12:56:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:35.877 12:56:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.877 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:35.877 "name": "raid_bdev1", 00:24:35.877 "uuid": "c24e2622-cecf-47d2-877d-3339c2760208", 00:24:35.877 "strip_size_kb": 0, 00:24:35.877 "state": "online", 00:24:35.877 "raid_level": "raid1", 00:24:35.877 "superblock": false, 00:24:35.877 "num_base_bdevs": 4, 00:24:35.877 "num_base_bdevs_discovered": 4, 00:24:35.877 "num_base_bdevs_operational": 4, 00:24:35.877 "process": { 00:24:35.877 "type": "rebuild", 00:24:35.877 "target": "spare", 00:24:35.877 "progress": { 00:24:35.877 "blocks": 10240, 
00:24:35.877 "percent": 15 00:24:35.877 } 00:24:35.877 }, 00:24:35.877 "base_bdevs_list": [ 00:24:35.877 { 00:24:35.877 "name": "spare", 00:24:35.877 "uuid": "4f9b3ba4-71e6-50a3-a19f-e61c60be6db2", 00:24:35.877 "is_configured": true, 00:24:35.877 "data_offset": 0, 00:24:35.877 "data_size": 65536 00:24:35.877 }, 00:24:35.877 { 00:24:35.877 "name": "BaseBdev2", 00:24:35.877 "uuid": "473e2c54-e735-5f17-bc16-6230a8f67224", 00:24:35.877 "is_configured": true, 00:24:35.877 "data_offset": 0, 00:24:35.877 "data_size": 65536 00:24:35.877 }, 00:24:35.877 { 00:24:35.877 "name": "BaseBdev3", 00:24:35.877 "uuid": "de8ea0c0-8684-5f11-864a-ee7a8306ecaf", 00:24:35.877 "is_configured": true, 00:24:35.877 "data_offset": 0, 00:24:35.877 "data_size": 65536 00:24:35.877 }, 00:24:35.877 { 00:24:35.877 "name": "BaseBdev4", 00:24:35.877 "uuid": "d58074e5-ef18-51ee-b344-d4f8fc5551a8", 00:24:35.877 "is_configured": true, 00:24:35.877 "data_offset": 0, 00:24:35.877 "data_size": 65536 00:24:35.877 } 00:24:35.877 ] 00:24:35.877 }' 00:24:35.877 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:35.877 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:35.877 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:35.877 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:35.877 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:35.877 12:56:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.877 12:56:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:35.877 [2024-12-05 12:56:18.345806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:36.135 [2024-12-05 12:56:18.485354] bdev_raid.c:2571:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:36.135 [2024-12-05 12:56:18.499157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:36.135 [2024-12-05 12:56:18.499297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:36.135 [2024-12-05 12:56:18.499327] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:36.135 [2024-12-05 12:56:18.519956] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:24:36.135 12:56:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.135 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:36.135 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:36.135 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:36.135 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:36.135 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:36.135 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:36.135 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:36.135 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:36.135 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:36.135 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:36.135 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:36.136 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:24:36.136 12:56:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.136 12:56:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:36.136 12:56:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.136 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:36.136 "name": "raid_bdev1", 00:24:36.136 "uuid": "c24e2622-cecf-47d2-877d-3339c2760208", 00:24:36.136 "strip_size_kb": 0, 00:24:36.136 "state": "online", 00:24:36.136 "raid_level": "raid1", 00:24:36.136 "superblock": false, 00:24:36.136 "num_base_bdevs": 4, 00:24:36.136 "num_base_bdevs_discovered": 3, 00:24:36.136 "num_base_bdevs_operational": 3, 00:24:36.136 "base_bdevs_list": [ 00:24:36.136 { 00:24:36.136 "name": null, 00:24:36.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.136 "is_configured": false, 00:24:36.136 "data_offset": 0, 00:24:36.136 "data_size": 65536 00:24:36.136 }, 00:24:36.136 { 00:24:36.136 "name": "BaseBdev2", 00:24:36.136 "uuid": "473e2c54-e735-5f17-bc16-6230a8f67224", 00:24:36.136 "is_configured": true, 00:24:36.136 "data_offset": 0, 00:24:36.136 "data_size": 65536 00:24:36.136 }, 00:24:36.136 { 00:24:36.136 "name": "BaseBdev3", 00:24:36.136 "uuid": "de8ea0c0-8684-5f11-864a-ee7a8306ecaf", 00:24:36.136 "is_configured": true, 00:24:36.136 "data_offset": 0, 00:24:36.136 "data_size": 65536 00:24:36.136 }, 00:24:36.136 { 00:24:36.136 "name": "BaseBdev4", 00:24:36.136 "uuid": "d58074e5-ef18-51ee-b344-d4f8fc5551a8", 00:24:36.136 "is_configured": true, 00:24:36.136 "data_offset": 0, 00:24:36.136 "data_size": 65536 00:24:36.136 } 00:24:36.136 ] 00:24:36.136 }' 00:24:36.136 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:36.136 12:56:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:36.395 12:56:18 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:36.395 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:36.395 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:36.395 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:36.395 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:36.395 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:36.395 12:56:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.395 12:56:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:36.395 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:36.395 12:56:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.395 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:36.395 "name": "raid_bdev1", 00:24:36.395 "uuid": "c24e2622-cecf-47d2-877d-3339c2760208", 00:24:36.395 "strip_size_kb": 0, 00:24:36.395 "state": "online", 00:24:36.395 "raid_level": "raid1", 00:24:36.395 "superblock": false, 00:24:36.395 "num_base_bdevs": 4, 00:24:36.395 "num_base_bdevs_discovered": 3, 00:24:36.395 "num_base_bdevs_operational": 3, 00:24:36.395 "base_bdevs_list": [ 00:24:36.395 { 00:24:36.395 "name": null, 00:24:36.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.395 "is_configured": false, 00:24:36.395 "data_offset": 0, 00:24:36.395 "data_size": 65536 00:24:36.395 }, 00:24:36.395 { 00:24:36.395 "name": "BaseBdev2", 00:24:36.395 "uuid": "473e2c54-e735-5f17-bc16-6230a8f67224", 00:24:36.395 "is_configured": true, 00:24:36.395 "data_offset": 0, 00:24:36.395 "data_size": 
65536 00:24:36.395 }, 00:24:36.395 { 00:24:36.395 "name": "BaseBdev3", 00:24:36.395 "uuid": "de8ea0c0-8684-5f11-864a-ee7a8306ecaf", 00:24:36.395 "is_configured": true, 00:24:36.395 "data_offset": 0, 00:24:36.395 "data_size": 65536 00:24:36.395 }, 00:24:36.395 { 00:24:36.395 "name": "BaseBdev4", 00:24:36.395 "uuid": "d58074e5-ef18-51ee-b344-d4f8fc5551a8", 00:24:36.395 "is_configured": true, 00:24:36.395 "data_offset": 0, 00:24:36.395 "data_size": 65536 00:24:36.395 } 00:24:36.395 ] 00:24:36.395 }' 00:24:36.395 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:36.395 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:36.395 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:36.395 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:36.395 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:36.395 12:56:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.395 12:56:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:36.395 [2024-12-05 12:56:18.945063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:36.395 169.50 IOPS, 508.50 MiB/s [2024-12-05T12:56:18.982Z] 12:56:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.395 12:56:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:36.395 [2024-12-05 12:56:18.971443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:24:36.395 [2024-12-05 12:56:18.973093] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:36.653 [2024-12-05 12:56:19.085238] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:36.653 [2024-12-05 12:56:19.086258] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:36.911 [2024-12-05 12:56:19.301323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:36.911 [2024-12-05 12:56:19.302019] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:37.477 [2024-12-05 12:56:19.773851] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:37.477 150.00 IOPS, 450.00 MiB/s [2024-12-05T12:56:20.064Z] 12:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:37.477 12:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:37.477 12:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:37.477 12:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:37.477 12:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:37.477 12:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:37.477 12:56:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.477 12:56:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.477 12:56:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:37.477 12:56:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.477 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:24:37.477 "name": "raid_bdev1", 00:24:37.477 "uuid": "c24e2622-cecf-47d2-877d-3339c2760208", 00:24:37.477 "strip_size_kb": 0, 00:24:37.477 "state": "online", 00:24:37.477 "raid_level": "raid1", 00:24:37.477 "superblock": false, 00:24:37.477 "num_base_bdevs": 4, 00:24:37.477 "num_base_bdevs_discovered": 4, 00:24:37.477 "num_base_bdevs_operational": 4, 00:24:37.477 "process": { 00:24:37.477 "type": "rebuild", 00:24:37.477 "target": "spare", 00:24:37.477 "progress": { 00:24:37.477 "blocks": 12288, 00:24:37.477 "percent": 18 00:24:37.477 } 00:24:37.477 }, 00:24:37.477 "base_bdevs_list": [ 00:24:37.477 { 00:24:37.477 "name": "spare", 00:24:37.477 "uuid": "4f9b3ba4-71e6-50a3-a19f-e61c60be6db2", 00:24:37.477 "is_configured": true, 00:24:37.477 "data_offset": 0, 00:24:37.477 "data_size": 65536 00:24:37.477 }, 00:24:37.477 { 00:24:37.477 "name": "BaseBdev2", 00:24:37.477 "uuid": "473e2c54-e735-5f17-bc16-6230a8f67224", 00:24:37.477 "is_configured": true, 00:24:37.477 "data_offset": 0, 00:24:37.477 "data_size": 65536 00:24:37.477 }, 00:24:37.477 { 00:24:37.477 "name": "BaseBdev3", 00:24:37.477 "uuid": "de8ea0c0-8684-5f11-864a-ee7a8306ecaf", 00:24:37.477 "is_configured": true, 00:24:37.477 "data_offset": 0, 00:24:37.477 "data_size": 65536 00:24:37.477 }, 00:24:37.477 { 00:24:37.477 "name": "BaseBdev4", 00:24:37.477 "uuid": "d58074e5-ef18-51ee-b344-d4f8fc5551a8", 00:24:37.477 "is_configured": true, 00:24:37.477 "data_offset": 0, 00:24:37.477 "data_size": 65536 00:24:37.477 } 00:24:37.477 ] 00:24:37.477 }' 00:24:37.477 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:37.477 [2024-12-05 12:56:20.014725] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:37.477 [2024-12-05 12:56:20.015237] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:37.477 
12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:37.477 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:37.737 [2024-12-05 12:56:20.085046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:37.737 [2024-12-05 12:56:20.137052] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:37.737 [2024-12-05 12:56:20.156967] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:24:37.737 [2024-12-05 12:56:20.156996] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:24:37.737 [2024-12-05 12:56:20.157587] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 
00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:37.737 "name": "raid_bdev1", 00:24:37.737 "uuid": "c24e2622-cecf-47d2-877d-3339c2760208", 00:24:37.737 "strip_size_kb": 0, 00:24:37.737 "state": "online", 00:24:37.737 "raid_level": "raid1", 00:24:37.737 "superblock": false, 00:24:37.737 "num_base_bdevs": 4, 00:24:37.737 "num_base_bdevs_discovered": 3, 00:24:37.737 "num_base_bdevs_operational": 3, 00:24:37.737 "process": { 00:24:37.737 "type": "rebuild", 00:24:37.737 "target": "spare", 00:24:37.737 "progress": { 00:24:37.737 "blocks": 16384, 00:24:37.737 "percent": 25 00:24:37.737 } 00:24:37.737 }, 00:24:37.737 "base_bdevs_list": [ 00:24:37.737 { 00:24:37.737 "name": "spare", 00:24:37.737 "uuid": 
"4f9b3ba4-71e6-50a3-a19f-e61c60be6db2", 00:24:37.737 "is_configured": true, 00:24:37.737 "data_offset": 0, 00:24:37.737 "data_size": 65536 00:24:37.737 }, 00:24:37.737 { 00:24:37.737 "name": null, 00:24:37.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:37.737 "is_configured": false, 00:24:37.737 "data_offset": 0, 00:24:37.737 "data_size": 65536 00:24:37.737 }, 00:24:37.737 { 00:24:37.737 "name": "BaseBdev3", 00:24:37.737 "uuid": "de8ea0c0-8684-5f11-864a-ee7a8306ecaf", 00:24:37.737 "is_configured": true, 00:24:37.737 "data_offset": 0, 00:24:37.737 "data_size": 65536 00:24:37.737 }, 00:24:37.737 { 00:24:37.737 "name": "BaseBdev4", 00:24:37.737 "uuid": "d58074e5-ef18-51ee-b344-d4f8fc5551a8", 00:24:37.737 "is_configured": true, 00:24:37.737 "data_offset": 0, 00:24:37.737 "data_size": 65536 00:24:37.737 } 00:24:37.737 ] 00:24:37.737 }' 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=373 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:37.737 12:56:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.737 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:37.737 "name": "raid_bdev1", 00:24:37.737 "uuid": "c24e2622-cecf-47d2-877d-3339c2760208", 00:24:37.737 "strip_size_kb": 0, 00:24:37.737 "state": "online", 00:24:37.737 "raid_level": "raid1", 00:24:37.737 "superblock": false, 00:24:37.737 "num_base_bdevs": 4, 00:24:37.737 "num_base_bdevs_discovered": 3, 00:24:37.737 "num_base_bdevs_operational": 3, 00:24:37.737 "process": { 00:24:37.737 "type": "rebuild", 00:24:37.737 "target": "spare", 00:24:37.737 "progress": { 00:24:37.737 "blocks": 18432, 00:24:37.737 "percent": 28 00:24:37.738 } 00:24:37.738 }, 00:24:37.738 "base_bdevs_list": [ 00:24:37.738 { 00:24:37.738 "name": "spare", 00:24:37.738 "uuid": "4f9b3ba4-71e6-50a3-a19f-e61c60be6db2", 00:24:37.738 "is_configured": true, 00:24:37.738 "data_offset": 0, 00:24:37.738 "data_size": 65536 00:24:37.738 }, 00:24:37.738 { 00:24:37.738 "name": null, 00:24:37.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:37.738 "is_configured": false, 00:24:37.738 "data_offset": 0, 00:24:37.738 "data_size": 65536 00:24:37.738 }, 00:24:37.738 { 00:24:37.738 "name": "BaseBdev3", 00:24:37.738 "uuid": "de8ea0c0-8684-5f11-864a-ee7a8306ecaf", 00:24:37.738 "is_configured": true, 00:24:37.738 "data_offset": 0, 00:24:37.738 "data_size": 65536 00:24:37.738 }, 
00:24:37.738 { 00:24:37.738 "name": "BaseBdev4", 00:24:37.738 "uuid": "d58074e5-ef18-51ee-b344-d4f8fc5551a8", 00:24:37.738 "is_configured": true, 00:24:37.738 "data_offset": 0, 00:24:37.738 "data_size": 65536 00:24:37.738 } 00:24:37.738 ] 00:24:37.738 }' 00:24:37.738 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:37.996 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:37.996 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:37.996 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:37.996 12:56:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:37.996 [2024-12-05 12:56:20.487600] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:38.253 [2024-12-05 12:56:20.725856] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:24:39.101 132.50 IOPS, 397.50 MiB/s [2024-12-05T12:56:21.688Z] 12:56:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:39.101 12:56:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:39.101 12:56:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:39.101 12:56:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:39.101 12:56:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:39.101 12:56:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:39.101 12:56:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:39.101 12:56:21 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.101 12:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:39.101 12:56:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.101 12:56:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.101 12:56:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:39.101 "name": "raid_bdev1", 00:24:39.101 "uuid": "c24e2622-cecf-47d2-877d-3339c2760208", 00:24:39.101 "strip_size_kb": 0, 00:24:39.101 "state": "online", 00:24:39.101 "raid_level": "raid1", 00:24:39.101 "superblock": false, 00:24:39.101 "num_base_bdevs": 4, 00:24:39.101 "num_base_bdevs_discovered": 3, 00:24:39.101 "num_base_bdevs_operational": 3, 00:24:39.101 "process": { 00:24:39.101 "type": "rebuild", 00:24:39.101 "target": "spare", 00:24:39.101 "progress": { 00:24:39.101 "blocks": 36864, 00:24:39.101 "percent": 56 00:24:39.101 } 00:24:39.101 }, 00:24:39.101 "base_bdevs_list": [ 00:24:39.101 { 00:24:39.101 "name": "spare", 00:24:39.101 "uuid": "4f9b3ba4-71e6-50a3-a19f-e61c60be6db2", 00:24:39.101 "is_configured": true, 00:24:39.101 "data_offset": 0, 00:24:39.101 "data_size": 65536 00:24:39.101 }, 00:24:39.101 { 00:24:39.101 "name": null, 00:24:39.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.101 "is_configured": false, 00:24:39.101 "data_offset": 0, 00:24:39.101 "data_size": 65536 00:24:39.101 }, 00:24:39.101 { 00:24:39.101 "name": "BaseBdev3", 00:24:39.101 "uuid": "de8ea0c0-8684-5f11-864a-ee7a8306ecaf", 00:24:39.101 "is_configured": true, 00:24:39.101 "data_offset": 0, 00:24:39.101 "data_size": 65536 00:24:39.101 }, 00:24:39.101 { 00:24:39.101 "name": "BaseBdev4", 00:24:39.101 "uuid": "d58074e5-ef18-51ee-b344-d4f8fc5551a8", 00:24:39.101 "is_configured": true, 00:24:39.101 "data_offset": 0, 00:24:39.101 "data_size": 65536 00:24:39.101 } 
00:24:39.101 ] 00:24:39.101 }' 00:24:39.101 12:56:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:39.101 12:56:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:39.101 12:56:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:39.101 [2024-12-05 12:56:21.460705] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:24:39.101 12:56:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:39.101 12:56:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:39.383 [2024-12-05 12:56:21.895939] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:24:39.641 119.00 IOPS, 357.00 MiB/s [2024-12-05T12:56:22.228Z] [2024-12-05 12:56:22.003685] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:24:39.641 [2024-12-05 12:56:22.003932] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:24:39.900 [2024-12-05 12:56:22.437442] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:24:39.900 12:56:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:39.900 12:56:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:39.900 12:56:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:39.900 12:56:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:39.900 12:56:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:24:39.900 12:56:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:39.900 12:56:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.900 12:56:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:39.900 12:56:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.900 12:56:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:40.165 12:56:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.165 12:56:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:40.165 "name": "raid_bdev1", 00:24:40.165 "uuid": "c24e2622-cecf-47d2-877d-3339c2760208", 00:24:40.165 "strip_size_kb": 0, 00:24:40.165 "state": "online", 00:24:40.165 "raid_level": "raid1", 00:24:40.165 "superblock": false, 00:24:40.165 "num_base_bdevs": 4, 00:24:40.165 "num_base_bdevs_discovered": 3, 00:24:40.165 "num_base_bdevs_operational": 3, 00:24:40.165 "process": { 00:24:40.165 "type": "rebuild", 00:24:40.165 "target": "spare", 00:24:40.165 "progress": { 00:24:40.165 "blocks": 53248, 00:24:40.165 "percent": 81 00:24:40.165 } 00:24:40.165 }, 00:24:40.165 "base_bdevs_list": [ 00:24:40.165 { 00:24:40.165 "name": "spare", 00:24:40.165 "uuid": "4f9b3ba4-71e6-50a3-a19f-e61c60be6db2", 00:24:40.165 "is_configured": true, 00:24:40.165 "data_offset": 0, 00:24:40.165 "data_size": 65536 00:24:40.165 }, 00:24:40.165 { 00:24:40.165 "name": null, 00:24:40.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.165 "is_configured": false, 00:24:40.165 "data_offset": 0, 00:24:40.165 "data_size": 65536 00:24:40.165 }, 00:24:40.165 { 00:24:40.165 "name": "BaseBdev3", 00:24:40.165 "uuid": "de8ea0c0-8684-5f11-864a-ee7a8306ecaf", 00:24:40.165 "is_configured": true, 00:24:40.165 "data_offset": 0, 00:24:40.165 
"data_size": 65536 00:24:40.165 }, 00:24:40.165 { 00:24:40.165 "name": "BaseBdev4", 00:24:40.165 "uuid": "d58074e5-ef18-51ee-b344-d4f8fc5551a8", 00:24:40.165 "is_configured": true, 00:24:40.165 "data_offset": 0, 00:24:40.165 "data_size": 65536 00:24:40.165 } 00:24:40.165 ] 00:24:40.165 }' 00:24:40.165 12:56:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:40.165 12:56:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:40.165 12:56:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:40.165 12:56:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:40.165 12:56:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:40.424 [2024-12-05 12:56:22.766028] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:24:40.682 105.00 IOPS, 315.00 MiB/s [2024-12-05T12:56:23.269Z] [2024-12-05 12:56:23.196913] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:40.941 [2024-12-05 12:56:23.301987] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:40.941 [2024-12-05 12:56:23.304051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:41.200 "name": "raid_bdev1", 00:24:41.200 "uuid": "c24e2622-cecf-47d2-877d-3339c2760208", 00:24:41.200 "strip_size_kb": 0, 00:24:41.200 "state": "online", 00:24:41.200 "raid_level": "raid1", 00:24:41.200 "superblock": false, 00:24:41.200 "num_base_bdevs": 4, 00:24:41.200 "num_base_bdevs_discovered": 3, 00:24:41.200 "num_base_bdevs_operational": 3, 00:24:41.200 "base_bdevs_list": [ 00:24:41.200 { 00:24:41.200 "name": "spare", 00:24:41.200 "uuid": "4f9b3ba4-71e6-50a3-a19f-e61c60be6db2", 00:24:41.200 "is_configured": true, 00:24:41.200 "data_offset": 0, 00:24:41.200 "data_size": 65536 00:24:41.200 }, 00:24:41.200 { 00:24:41.200 "name": null, 00:24:41.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.200 "is_configured": false, 00:24:41.200 "data_offset": 0, 00:24:41.200 "data_size": 65536 00:24:41.200 }, 00:24:41.200 { 00:24:41.200 "name": "BaseBdev3", 00:24:41.200 "uuid": "de8ea0c0-8684-5f11-864a-ee7a8306ecaf", 00:24:41.200 "is_configured": true, 00:24:41.200 "data_offset": 0, 00:24:41.200 "data_size": 65536 00:24:41.200 }, 00:24:41.200 { 00:24:41.200 "name": "BaseBdev4", 00:24:41.200 "uuid": "d58074e5-ef18-51ee-b344-d4f8fc5551a8", 00:24:41.200 "is_configured": true, 00:24:41.200 "data_offset": 0, 
00:24:41.200 "data_size": 65536 00:24:41.200 } 00:24:41.200 ] 00:24:41.200 }' 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:41.200 "name": "raid_bdev1", 00:24:41.200 "uuid": "c24e2622-cecf-47d2-877d-3339c2760208", 00:24:41.200 "strip_size_kb": 0, 00:24:41.200 "state": "online", 00:24:41.200 
"raid_level": "raid1", 00:24:41.200 "superblock": false, 00:24:41.200 "num_base_bdevs": 4, 00:24:41.200 "num_base_bdevs_discovered": 3, 00:24:41.200 "num_base_bdevs_operational": 3, 00:24:41.200 "base_bdevs_list": [ 00:24:41.200 { 00:24:41.200 "name": "spare", 00:24:41.200 "uuid": "4f9b3ba4-71e6-50a3-a19f-e61c60be6db2", 00:24:41.200 "is_configured": true, 00:24:41.200 "data_offset": 0, 00:24:41.200 "data_size": 65536 00:24:41.200 }, 00:24:41.200 { 00:24:41.200 "name": null, 00:24:41.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.200 "is_configured": false, 00:24:41.200 "data_offset": 0, 00:24:41.200 "data_size": 65536 00:24:41.200 }, 00:24:41.200 { 00:24:41.200 "name": "BaseBdev3", 00:24:41.200 "uuid": "de8ea0c0-8684-5f11-864a-ee7a8306ecaf", 00:24:41.200 "is_configured": true, 00:24:41.200 "data_offset": 0, 00:24:41.200 "data_size": 65536 00:24:41.200 }, 00:24:41.200 { 00:24:41.200 "name": "BaseBdev4", 00:24:41.200 "uuid": "d58074e5-ef18-51ee-b344-d4f8fc5551a8", 00:24:41.200 "is_configured": true, 00:24:41.200 "data_offset": 0, 00:24:41.200 "data_size": 65536 00:24:41.200 } 00:24:41.200 ] 00:24:41.200 }' 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:41.200 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:41.201 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:41.201 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:41.201 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:41.201 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:41.201 12:56:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:41.201 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:41.201 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:41.201 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:41.201 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:41.201 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:41.201 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:41.201 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.201 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.201 12:56:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.201 12:56:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.460 12:56:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.460 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:41.460 "name": "raid_bdev1", 00:24:41.460 "uuid": "c24e2622-cecf-47d2-877d-3339c2760208", 00:24:41.460 "strip_size_kb": 0, 00:24:41.460 "state": "online", 00:24:41.460 "raid_level": "raid1", 00:24:41.460 "superblock": false, 00:24:41.460 "num_base_bdevs": 4, 00:24:41.460 "num_base_bdevs_discovered": 3, 00:24:41.460 "num_base_bdevs_operational": 3, 00:24:41.460 "base_bdevs_list": [ 00:24:41.460 { 00:24:41.460 "name": "spare", 00:24:41.460 "uuid": "4f9b3ba4-71e6-50a3-a19f-e61c60be6db2", 00:24:41.460 "is_configured": true, 00:24:41.460 "data_offset": 0, 00:24:41.460 "data_size": 65536 00:24:41.460 }, 00:24:41.460 { 00:24:41.460 "name": null, 
00:24:41.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.460 "is_configured": false, 00:24:41.460 "data_offset": 0, 00:24:41.460 "data_size": 65536 00:24:41.460 }, 00:24:41.460 { 00:24:41.460 "name": "BaseBdev3", 00:24:41.460 "uuid": "de8ea0c0-8684-5f11-864a-ee7a8306ecaf", 00:24:41.460 "is_configured": true, 00:24:41.460 "data_offset": 0, 00:24:41.460 "data_size": 65536 00:24:41.460 }, 00:24:41.460 { 00:24:41.460 "name": "BaseBdev4", 00:24:41.460 "uuid": "d58074e5-ef18-51ee-b344-d4f8fc5551a8", 00:24:41.460 "is_configured": true, 00:24:41.460 "data_offset": 0, 00:24:41.460 "data_size": 65536 00:24:41.460 } 00:24:41.460 ] 00:24:41.460 }' 00:24:41.460 12:56:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:41.460 12:56:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.721 95.00 IOPS, 285.00 MiB/s [2024-12-05T12:56:24.308Z] 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:41.721 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.721 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.721 [2024-12-05 12:56:24.102526] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:41.721 [2024-12-05 12:56:24.102550] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:41.721 00:24:41.721 Latency(us) 00:24:41.721 [2024-12-05T12:56:24.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.721 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:41.721 raid_bdev1 : 7.21 92.79 278.36 0.00 0.00 15479.72 270.97 115343.36 00:24:41.721 [2024-12-05T12:56:24.308Z] =================================================================================================================== 00:24:41.721 
[2024-12-05T12:56:24.308Z] Total : 92.79 278.36 0.00 0.00 15479.72 270.97 115343.36 00:24:41.721 { 00:24:41.721 "results": [ 00:24:41.721 { 00:24:41.721 "job": "raid_bdev1", 00:24:41.721 "core_mask": "0x1", 00:24:41.721 "workload": "randrw", 00:24:41.721 "percentage": 50, 00:24:41.721 "status": "finished", 00:24:41.721 "queue_depth": 2, 00:24:41.721 "io_size": 3145728, 00:24:41.721 "runtime": 7.210175, 00:24:41.721 "iops": 92.78554265326432, 00:24:41.721 "mibps": 278.35662795979295, 00:24:41.721 "io_failed": 0, 00:24:41.721 "io_timeout": 0, 00:24:41.721 "avg_latency_us": 15479.717456594228, 00:24:41.721 "min_latency_us": 270.9661538461539, 00:24:41.721 "max_latency_us": 115343.36 00:24:41.721 } 00:24:41.721 ], 00:24:41.721 "core_count": 1 00:24:41.721 } 00:24:41.721 [2024-12-05 12:56:24.170808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:41.721 [2024-12-05 12:56:24.170868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:41.721 [2024-12-05 12:56:24.170955] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:41.721 [2024-12-05 12:56:24.170963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:41.721 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.721 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:24:41.721 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.721 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.721 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:41.721 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.721 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 
0 == 0 ]] 00:24:41.721 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:41.721 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:24:41.721 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:24:41.721 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:41.721 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:41.721 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:41.721 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:41.721 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:41.721 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:41.721 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:41.721 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:41.721 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:24:41.980 /dev/nbd0 00:24:41.980 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:41.980 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:41.980 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:41.980 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:41.981 12:56:24 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:41.981 1+0 records in 00:24:41.981 1+0 records out 00:24:41.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321564 s, 12.7 MB/s 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:41.981 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:24:42.294 /dev/nbd1 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@877 -- # break 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:42.294 1+0 records in 00:24:42.294 1+0 records out 00:24:42.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269768 s, 15.2 MB/s 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@51 -- # local i 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:42.294 12:56:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:42.555 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:42.555 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:42.555 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:42.555 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:42.555 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:42.555 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:42.555 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:42.555 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:42.555 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:24:42.555 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:24:42.555 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:24:42.555 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:42.555 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:24:42.555 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:42.555 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:42.555 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local 
nbd_list 00:24:42.555 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:42.555 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:42.555 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:42.555 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:24:42.813 /dev/nbd1 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:42.813 1+0 records in 00:24:42.813 1+0 records out 00:24:42.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220156 s, 18.6 MB/s 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:42.813 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:43.070 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:43.070 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:43.070 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:43.070 12:56:25 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:43.070 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:43.070 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:43.070 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:43.070 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:43.070 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:43.070 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:43.070 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:43.070 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:43.070 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:43.070 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:43.070 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:43.336 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:43.336 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:43.336 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:43.336 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:43.336 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:43.336 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:43.336 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:43.336 12:56:25 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:43.336 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:24:43.336 12:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76361 00:24:43.336 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76361 ']' 00:24:43.336 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76361 00:24:43.336 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:24:43.336 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.336 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76361 00:24:43.336 killing process with pid 76361 00:24:43.336 Received shutdown signal, test time was about 8.812420 seconds 00:24:43.336 00:24:43.336 Latency(us) 00:24:43.336 [2024-12-05T12:56:25.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.336 [2024-12-05T12:56:25.923Z] =================================================================================================================== 00:24:43.336 [2024-12-05T12:56:25.923Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:43.336 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:43.336 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:43.336 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76361' 00:24:43.336 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76361 00:24:43.336 [2024-12-05 12:56:25.760733] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:43.336 12:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 
76361 00:24:43.592 [2024-12-05 12:56:25.966362] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:24:44.158 00:24:44.158 real 0m11.190s 00:24:44.158 user 0m14.085s 00:24:44.158 sys 0m1.230s 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:44.158 ************************************ 00:24:44.158 END TEST raid_rebuild_test_io 00:24:44.158 ************************************ 00:24:44.158 12:56:26 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:24:44.158 12:56:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:24:44.158 12:56:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:44.158 12:56:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:44.158 ************************************ 00:24:44.158 START TEST raid_rebuild_test_sb_io 00:24:44.158 ************************************ 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:44.158 12:56:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local 
raid_bdev_size 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:24:44.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76760 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76760 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76760 ']' 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:44.158 12:56:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:44.158 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:44.158 Zero copy mechanism will not be used. 
00:24:44.158 [2024-12-05 12:56:26.686072] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:24:44.158 [2024-12-05 12:56:26.686189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76760 ] 00:24:44.415 [2024-12-05 12:56:26.845465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.415 [2024-12-05 12:56:26.946756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.671 [2024-12-05 12:56:27.084243] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:44.671 [2024-12-05 12:56:27.084296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:45.240 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:45.240 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:24:45.240 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:45.240 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:45.240 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.240 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:45.240 BaseBdev1_malloc 00:24:45.240 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.240 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:45.240 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.240 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:24:45.240 [2024-12-05 12:56:27.562339] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:45.240 [2024-12-05 12:56:27.562397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:45.240 [2024-12-05 12:56:27.562417] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:45.240 [2024-12-05 12:56:27.562428] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:45.240 [2024-12-05 12:56:27.564570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:45.240 [2024-12-05 12:56:27.564608] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:45.240 BaseBdev1 00:24:45.240 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:45.241 BaseBdev2_malloc 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:45.241 [2024-12-05 12:56:27.602453] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:24:45.241 [2024-12-05 12:56:27.602631] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:45.241 [2024-12-05 12:56:27.602740] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:45.241 [2024-12-05 12:56:27.602817] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:45.241 [2024-12-05 12:56:27.604975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:45.241 [2024-12-05 12:56:27.605011] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:45.241 BaseBdev2 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:45.241 BaseBdev3_malloc 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:45.241 [2024-12-05 12:56:27.670062] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:45.241 [2024-12-05 12:56:27.670123] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:45.241 [2024-12-05 12:56:27.670147] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:45.241 [2024-12-05 12:56:27.670159] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:45.241 [2024-12-05 12:56:27.672321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:45.241 [2024-12-05 12:56:27.672463] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:45.241 BaseBdev3 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:45.241 BaseBdev4_malloc 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:45.241 [2024-12-05 12:56:27.710698] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:45.241 [2024-12-05 12:56:27.710852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:45.241 [2024-12-05 12:56:27.710875] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:45.241 [2024-12-05 12:56:27.710886] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:45.241 [2024-12-05 12:56:27.713082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:45.241 [2024-12-05 12:56:27.713122] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:45.241 BaseBdev4 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:45.241 spare_malloc 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:45.241 spare_delay 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:45.241 [2024-12-05 12:56:27.755277] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:45.241 [2024-12-05 12:56:27.755326] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:24:45.241 [2024-12-05 12:56:27.755341] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:45.241 [2024-12-05 12:56:27.755352] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:45.241 [2024-12-05 12:56:27.757457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:45.241 [2024-12-05 12:56:27.757506] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:45.241 spare 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.241 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:45.241 [2024-12-05 12:56:27.763332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:45.241 [2024-12-05 12:56:27.765175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:45.241 [2024-12-05 12:56:27.765237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:45.241 [2024-12-05 12:56:27.765288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:45.241 [2024-12-05 12:56:27.765466] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:45.241 [2024-12-05 12:56:27.765480] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:45.241 [2024-12-05 12:56:27.765749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:45.241 [2024-12-05 12:56:27.765905] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: 
raid bdev generic 0x617000007780 00:24:45.241 [2024-12-05 12:56:27.765914] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:45.241 [2024-12-05 12:56:27.766048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:45.242 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.242 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:45.242 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:45.242 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:45.242 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:45.242 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:45.242 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:45.242 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:45.242 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:45.242 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:45.242 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:45.242 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.242 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.242 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.242 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:24:45.242 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.242 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:45.242 "name": "raid_bdev1", 00:24:45.242 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:45.242 "strip_size_kb": 0, 00:24:45.242 "state": "online", 00:24:45.242 "raid_level": "raid1", 00:24:45.242 "superblock": true, 00:24:45.242 "num_base_bdevs": 4, 00:24:45.242 "num_base_bdevs_discovered": 4, 00:24:45.242 "num_base_bdevs_operational": 4, 00:24:45.242 "base_bdevs_list": [ 00:24:45.242 { 00:24:45.242 "name": "BaseBdev1", 00:24:45.242 "uuid": "d237a4c6-8f42-542a-8e68-b2016e7517cf", 00:24:45.242 "is_configured": true, 00:24:45.242 "data_offset": 2048, 00:24:45.242 "data_size": 63488 00:24:45.242 }, 00:24:45.242 { 00:24:45.242 "name": "BaseBdev2", 00:24:45.242 "uuid": "35b7e875-295b-5b21-ac8e-696e14e71831", 00:24:45.242 "is_configured": true, 00:24:45.242 "data_offset": 2048, 00:24:45.242 "data_size": 63488 00:24:45.242 }, 00:24:45.242 { 00:24:45.242 "name": "BaseBdev3", 00:24:45.242 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:45.242 "is_configured": true, 00:24:45.242 "data_offset": 2048, 00:24:45.242 "data_size": 63488 00:24:45.242 }, 00:24:45.242 { 00:24:45.242 "name": "BaseBdev4", 00:24:45.242 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:45.242 "is_configured": true, 00:24:45.242 "data_offset": 2048, 00:24:45.242 "data_size": 63488 00:24:45.242 } 00:24:45.242 ] 00:24:45.242 }' 00:24:45.242 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:45.242 12:56:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:45.502 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:45.502 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:45.502 12:56:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.502 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:45.502 [2024-12-05 12:56:28.071768] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:45.502 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:45.761 [2024-12-05 12:56:28.131388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.761 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:45.762 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.762 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.762 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:45.762 "name": "raid_bdev1", 00:24:45.762 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:45.762 "strip_size_kb": 0, 00:24:45.762 "state": "online", 00:24:45.762 
"raid_level": "raid1", 00:24:45.762 "superblock": true, 00:24:45.762 "num_base_bdevs": 4, 00:24:45.762 "num_base_bdevs_discovered": 3, 00:24:45.762 "num_base_bdevs_operational": 3, 00:24:45.762 "base_bdevs_list": [ 00:24:45.762 { 00:24:45.762 "name": null, 00:24:45.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.762 "is_configured": false, 00:24:45.762 "data_offset": 0, 00:24:45.762 "data_size": 63488 00:24:45.762 }, 00:24:45.762 { 00:24:45.762 "name": "BaseBdev2", 00:24:45.762 "uuid": "35b7e875-295b-5b21-ac8e-696e14e71831", 00:24:45.762 "is_configured": true, 00:24:45.762 "data_offset": 2048, 00:24:45.762 "data_size": 63488 00:24:45.762 }, 00:24:45.762 { 00:24:45.762 "name": "BaseBdev3", 00:24:45.762 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:45.762 "is_configured": true, 00:24:45.762 "data_offset": 2048, 00:24:45.762 "data_size": 63488 00:24:45.762 }, 00:24:45.762 { 00:24:45.762 "name": "BaseBdev4", 00:24:45.762 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:45.762 "is_configured": true, 00:24:45.762 "data_offset": 2048, 00:24:45.762 "data_size": 63488 00:24:45.762 } 00:24:45.762 ] 00:24:45.762 }' 00:24:45.762 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:45.762 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:45.762 [2024-12-05 12:56:28.224663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:24:45.762 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:45.762 Zero copy mechanism will not be used. 00:24:45.762 Running I/O for 60 seconds... 
00:24:46.022 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:46.022 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.022 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:46.022 [2024-12-05 12:56:28.455057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:46.022 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.022 12:56:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:46.022 [2024-12-05 12:56:28.539013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:24:46.022 [2024-12-05 12:56:28.541024] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:46.280 [2024-12-05 12:56:28.650970] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:46.280 [2024-12-05 12:56:28.652210] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:46.280 [2024-12-05 12:56:28.855984] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:46.280 [2024-12-05 12:56:28.856635] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:46.843 137.00 IOPS, 411.00 MiB/s [2024-12-05T12:56:29.430Z] [2024-12-05 12:56:29.324215] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:47.100 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:47.100 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 00:24:47.100 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:47.100 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:47.100 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:47.100 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:47.100 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.100 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.100 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:47.100 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.100 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:47.100 "name": "raid_bdev1", 00:24:47.100 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:47.100 "strip_size_kb": 0, 00:24:47.100 "state": "online", 00:24:47.100 "raid_level": "raid1", 00:24:47.100 "superblock": true, 00:24:47.100 "num_base_bdevs": 4, 00:24:47.100 "num_base_bdevs_discovered": 4, 00:24:47.100 "num_base_bdevs_operational": 4, 00:24:47.100 "process": { 00:24:47.100 "type": "rebuild", 00:24:47.100 "target": "spare", 00:24:47.100 "progress": { 00:24:47.100 "blocks": 10240, 00:24:47.100 "percent": 16 00:24:47.100 } 00:24:47.100 }, 00:24:47.100 "base_bdevs_list": [ 00:24:47.100 { 00:24:47.100 "name": "spare", 00:24:47.100 "uuid": "63ede2bc-e206-529b-8f68-e9f38d481394", 00:24:47.100 "is_configured": true, 00:24:47.100 "data_offset": 2048, 00:24:47.100 "data_size": 63488 00:24:47.100 }, 00:24:47.100 { 00:24:47.100 "name": "BaseBdev2", 00:24:47.100 "uuid": "35b7e875-295b-5b21-ac8e-696e14e71831", 00:24:47.100 "is_configured": true, 
00:24:47.100 "data_offset": 2048, 00:24:47.100 "data_size": 63488 00:24:47.100 }, 00:24:47.100 { 00:24:47.100 "name": "BaseBdev3", 00:24:47.100 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:47.100 "is_configured": true, 00:24:47.100 "data_offset": 2048, 00:24:47.100 "data_size": 63488 00:24:47.100 }, 00:24:47.100 { 00:24:47.100 "name": "BaseBdev4", 00:24:47.100 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:47.100 "is_configured": true, 00:24:47.100 "data_offset": 2048, 00:24:47.100 "data_size": 63488 00:24:47.100 } 00:24:47.100 ] 00:24:47.100 }' 00:24:47.100 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:47.100 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:47.100 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:47.100 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:47.100 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:47.100 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.100 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:47.100 [2024-12-05 12:56:29.621728] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:47.356 [2024-12-05 12:56:29.768330] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:47.356 [2024-12-05 12:56:29.779431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:47.356 [2024-12-05 12:56:29.779597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:47.356 [2024-12-05 12:56:29.779617] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such 
device 00:24:47.356 [2024-12-05 12:56:29.806533] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:24:47.356 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.356 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:47.356 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:47.356 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:47.356 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:47.356 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:47.356 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:47.356 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:47.356 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:47.356 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:47.356 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:47.356 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:47.356 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.356 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.356 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:47.356 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.356 12:56:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:47.357 "name": "raid_bdev1", 00:24:47.357 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:47.357 "strip_size_kb": 0, 00:24:47.357 "state": "online", 00:24:47.357 "raid_level": "raid1", 00:24:47.357 "superblock": true, 00:24:47.357 "num_base_bdevs": 4, 00:24:47.357 "num_base_bdevs_discovered": 3, 00:24:47.357 "num_base_bdevs_operational": 3, 00:24:47.357 "base_bdevs_list": [ 00:24:47.357 { 00:24:47.357 "name": null, 00:24:47.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.357 "is_configured": false, 00:24:47.357 "data_offset": 0, 00:24:47.357 "data_size": 63488 00:24:47.357 }, 00:24:47.357 { 00:24:47.357 "name": "BaseBdev2", 00:24:47.357 "uuid": "35b7e875-295b-5b21-ac8e-696e14e71831", 00:24:47.357 "is_configured": true, 00:24:47.357 "data_offset": 2048, 00:24:47.357 "data_size": 63488 00:24:47.357 }, 00:24:47.357 { 00:24:47.357 "name": "BaseBdev3", 00:24:47.357 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:47.357 "is_configured": true, 00:24:47.357 "data_offset": 2048, 00:24:47.357 "data_size": 63488 00:24:47.357 }, 00:24:47.357 { 00:24:47.357 "name": "BaseBdev4", 00:24:47.357 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:47.357 "is_configured": true, 00:24:47.357 "data_offset": 2048, 00:24:47.357 "data_size": 63488 00:24:47.357 } 00:24:47.357 ] 00:24:47.357 }' 00:24:47.357 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:47.357 12:56:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:47.613 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:47.613 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:47.613 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:47.613 12:56:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:47.613 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:47.613 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:47.613 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.613 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.613 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:47.613 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.613 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:47.613 "name": "raid_bdev1", 00:24:47.613 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:47.613 "strip_size_kb": 0, 00:24:47.613 "state": "online", 00:24:47.613 "raid_level": "raid1", 00:24:47.613 "superblock": true, 00:24:47.613 "num_base_bdevs": 4, 00:24:47.613 "num_base_bdevs_discovered": 3, 00:24:47.613 "num_base_bdevs_operational": 3, 00:24:47.613 "base_bdevs_list": [ 00:24:47.613 { 00:24:47.613 "name": null, 00:24:47.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.613 "is_configured": false, 00:24:47.613 "data_offset": 0, 00:24:47.613 "data_size": 63488 00:24:47.613 }, 00:24:47.613 { 00:24:47.613 "name": "BaseBdev2", 00:24:47.613 "uuid": "35b7e875-295b-5b21-ac8e-696e14e71831", 00:24:47.613 "is_configured": true, 00:24:47.613 "data_offset": 2048, 00:24:47.613 "data_size": 63488 00:24:47.613 }, 00:24:47.613 { 00:24:47.613 "name": "BaseBdev3", 00:24:47.613 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:47.613 "is_configured": true, 00:24:47.613 "data_offset": 2048, 00:24:47.613 "data_size": 63488 00:24:47.613 }, 00:24:47.613 { 00:24:47.613 "name": "BaseBdev4", 00:24:47.613 "uuid": 
"77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:47.613 "is_configured": true, 00:24:47.613 "data_offset": 2048, 00:24:47.613 "data_size": 63488 00:24:47.613 } 00:24:47.613 ] 00:24:47.613 }' 00:24:47.613 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:47.613 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:47.613 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:47.870 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:47.870 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:47.870 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.870 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:47.870 [2024-12-05 12:56:30.222393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:47.870 138.50 IOPS, 415.50 MiB/s [2024-12-05T12:56:30.457Z] 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.870 12:56:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:47.870 [2024-12-05 12:56:30.269422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:24:47.870 [2024-12-05 12:56:30.271391] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:47.870 [2024-12-05 12:56:30.395452] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:47.870 [2024-12-05 12:56:30.396606] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:48.128 [2024-12-05 12:56:30.608147] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:48.128 [2024-12-05 12:56:30.608915] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:48.384 [2024-12-05 12:56:30.955937] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:48.642 [2024-12-05 12:56:31.175639] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:48.642 [2024-12-05 12:56:31.176032] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:48.899 125.67 IOPS, 377.00 MiB/s [2024-12-05T12:56:31.486Z] 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:48.899 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:48.899 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:48.899 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:48.899 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:48.899 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.899 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:48.899 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.899 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:48.899 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.899 12:56:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:48.899 "name": "raid_bdev1", 00:24:48.899 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:48.899 "strip_size_kb": 0, 00:24:48.899 "state": "online", 00:24:48.899 "raid_level": "raid1", 00:24:48.899 "superblock": true, 00:24:48.899 "num_base_bdevs": 4, 00:24:48.899 "num_base_bdevs_discovered": 4, 00:24:48.899 "num_base_bdevs_operational": 4, 00:24:48.899 "process": { 00:24:48.899 "type": "rebuild", 00:24:48.899 "target": "spare", 00:24:48.899 "progress": { 00:24:48.899 "blocks": 10240, 00:24:48.899 "percent": 16 00:24:48.899 } 00:24:48.899 }, 00:24:48.899 "base_bdevs_list": [ 00:24:48.899 { 00:24:48.899 "name": "spare", 00:24:48.899 "uuid": "63ede2bc-e206-529b-8f68-e9f38d481394", 00:24:48.899 "is_configured": true, 00:24:48.899 "data_offset": 2048, 00:24:48.899 "data_size": 63488 00:24:48.899 }, 00:24:48.899 { 00:24:48.899 "name": "BaseBdev2", 00:24:48.899 "uuid": "35b7e875-295b-5b21-ac8e-696e14e71831", 00:24:48.899 "is_configured": true, 00:24:48.899 "data_offset": 2048, 00:24:48.899 "data_size": 63488 00:24:48.899 }, 00:24:48.899 { 00:24:48.899 "name": "BaseBdev3", 00:24:48.899 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:48.899 "is_configured": true, 00:24:48.899 "data_offset": 2048, 00:24:48.899 "data_size": 63488 00:24:48.899 }, 00:24:48.899 { 00:24:48.899 "name": "BaseBdev4", 00:24:48.900 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:48.900 "is_configured": true, 00:24:48.900 "data_offset": 2048, 00:24:48.900 "data_size": 63488 00:24:48.900 } 00:24:48.900 ] 00:24:48.900 }' 00:24:48.900 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:48.900 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:48.900 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:48.900 12:56:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:48.900 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:48.900 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:48.900 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:48.900 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:24:48.900 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:48.900 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:24:48.900 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:24:48.900 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.900 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:48.900 [2024-12-05 12:56:31.356274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:49.158 [2024-12-05 12:56:31.510833] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:24:49.158 [2024-12-05 12:56:31.510877] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 
-- # local raid_bdev_name=raid_bdev1 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:49.158 "name": "raid_bdev1", 00:24:49.158 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:49.158 "strip_size_kb": 0, 00:24:49.158 "state": "online", 00:24:49.158 "raid_level": "raid1", 00:24:49.158 "superblock": true, 00:24:49.158 "num_base_bdevs": 4, 00:24:49.158 "num_base_bdevs_discovered": 3, 00:24:49.158 "num_base_bdevs_operational": 3, 00:24:49.158 "process": { 00:24:49.158 "type": "rebuild", 00:24:49.158 "target": "spare", 00:24:49.158 "progress": { 00:24:49.158 "blocks": 12288, 00:24:49.158 "percent": 19 00:24:49.158 } 00:24:49.158 }, 00:24:49.158 "base_bdevs_list": [ 00:24:49.158 { 00:24:49.158 "name": "spare", 00:24:49.158 "uuid": "63ede2bc-e206-529b-8f68-e9f38d481394", 00:24:49.158 "is_configured": true, 00:24:49.158 "data_offset": 2048, 00:24:49.158 "data_size": 63488 00:24:49.158 }, 00:24:49.158 { 00:24:49.158 "name": null, 00:24:49.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.158 "is_configured": false, 
00:24:49.158 "data_offset": 0, 00:24:49.158 "data_size": 63488 00:24:49.158 }, 00:24:49.158 { 00:24:49.158 "name": "BaseBdev3", 00:24:49.158 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:49.158 "is_configured": true, 00:24:49.158 "data_offset": 2048, 00:24:49.158 "data_size": 63488 00:24:49.158 }, 00:24:49.158 { 00:24:49.158 "name": "BaseBdev4", 00:24:49.158 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:49.158 "is_configured": true, 00:24:49.158 "data_offset": 2048, 00:24:49.158 "data_size": 63488 00:24:49.158 } 00:24:49.158 ] 00:24:49.158 }' 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=384 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:49.158 "name": "raid_bdev1", 00:24:49.158 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:49.158 "strip_size_kb": 0, 00:24:49.158 "state": "online", 00:24:49.158 "raid_level": "raid1", 00:24:49.158 "superblock": true, 00:24:49.158 "num_base_bdevs": 4, 00:24:49.158 "num_base_bdevs_discovered": 3, 00:24:49.158 "num_base_bdevs_operational": 3, 00:24:49.158 "process": { 00:24:49.158 "type": "rebuild", 00:24:49.158 "target": "spare", 00:24:49.158 "progress": { 00:24:49.158 "blocks": 12288, 00:24:49.158 "percent": 19 00:24:49.158 } 00:24:49.158 }, 00:24:49.158 "base_bdevs_list": [ 00:24:49.158 { 00:24:49.158 "name": "spare", 00:24:49.158 "uuid": "63ede2bc-e206-529b-8f68-e9f38d481394", 00:24:49.158 "is_configured": true, 00:24:49.158 "data_offset": 2048, 00:24:49.158 "data_size": 63488 00:24:49.158 }, 00:24:49.158 { 00:24:49.158 "name": null, 00:24:49.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.158 "is_configured": false, 00:24:49.158 "data_offset": 0, 00:24:49.158 "data_size": 63488 00:24:49.158 }, 00:24:49.158 { 00:24:49.158 "name": "BaseBdev3", 00:24:49.158 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:49.158 "is_configured": true, 00:24:49.158 "data_offset": 2048, 00:24:49.158 "data_size": 63488 00:24:49.158 }, 00:24:49.158 { 00:24:49.158 "name": "BaseBdev4", 00:24:49.158 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:49.158 "is_configured": true, 00:24:49.158 "data_offset": 2048, 00:24:49.158 "data_size": 63488 00:24:49.158 } 00:24:49.158 
] 00:24:49.158 }' 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:49.158 [2024-12-05 12:56:31.653029] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:49.158 12:56:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:49.417 [2024-12-05 12:56:31.755062] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:49.673 [2024-12-05 12:56:32.124888] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:49.930 109.00 IOPS, 327.00 MiB/s [2024-12-05T12:56:32.517Z] [2024-12-05 12:56:32.451460] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:24:50.187 [2024-12-05 12:56:32.574879] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:50.187 [2024-12-05 12:56:32.575211] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:50.187 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:50.187 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:50.187 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:50.187 
12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:50.187 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:50.187 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:50.187 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:50.187 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.187 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.187 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.187 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.187 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:50.187 "name": "raid_bdev1", 00:24:50.187 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:50.187 "strip_size_kb": 0, 00:24:50.187 "state": "online", 00:24:50.187 "raid_level": "raid1", 00:24:50.187 "superblock": true, 00:24:50.187 "num_base_bdevs": 4, 00:24:50.187 "num_base_bdevs_discovered": 3, 00:24:50.187 "num_base_bdevs_operational": 3, 00:24:50.187 "process": { 00:24:50.187 "type": "rebuild", 00:24:50.187 "target": "spare", 00:24:50.187 "progress": { 00:24:50.187 "blocks": 28672, 00:24:50.187 "percent": 45 00:24:50.187 } 00:24:50.187 }, 00:24:50.187 "base_bdevs_list": [ 00:24:50.187 { 00:24:50.187 "name": "spare", 00:24:50.187 "uuid": "63ede2bc-e206-529b-8f68-e9f38d481394", 00:24:50.187 "is_configured": true, 00:24:50.187 "data_offset": 2048, 00:24:50.187 "data_size": 63488 00:24:50.187 }, 00:24:50.187 { 00:24:50.187 "name": null, 00:24:50.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.187 "is_configured": false, 00:24:50.187 "data_offset": 0, 00:24:50.187 "data_size": 
63488 00:24:50.187 }, 00:24:50.187 { 00:24:50.187 "name": "BaseBdev3", 00:24:50.187 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:50.187 "is_configured": true, 00:24:50.187 "data_offset": 2048, 00:24:50.187 "data_size": 63488 00:24:50.187 }, 00:24:50.187 { 00:24:50.187 "name": "BaseBdev4", 00:24:50.187 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:50.187 "is_configured": true, 00:24:50.187 "data_offset": 2048, 00:24:50.187 "data_size": 63488 00:24:50.187 } 00:24:50.187 ] 00:24:50.187 }' 00:24:50.187 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:50.445 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:50.445 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:50.445 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:50.445 12:56:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:50.445 [2024-12-05 12:56:32.882159] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:24:50.445 [2024-12-05 12:56:32.882565] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:24:50.703 [2024-12-05 12:56:33.096728] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:24:50.703 [2024-12-05 12:56:33.097232] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:24:50.960 96.00 IOPS, 288.00 MiB/s [2024-12-05T12:56:33.547Z] [2024-12-05 12:56:33.438264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:24:51.524 12:56:33 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:51.524 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:51.524 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:51.524 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:51.524 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:51.524 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:51.524 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:51.524 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.524 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:51.524 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.524 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.524 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:51.524 "name": "raid_bdev1", 00:24:51.524 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:51.524 "strip_size_kb": 0, 00:24:51.524 "state": "online", 00:24:51.524 "raid_level": "raid1", 00:24:51.524 "superblock": true, 00:24:51.524 "num_base_bdevs": 4, 00:24:51.524 "num_base_bdevs_discovered": 3, 00:24:51.524 "num_base_bdevs_operational": 3, 00:24:51.524 "process": { 00:24:51.524 "type": "rebuild", 00:24:51.524 "target": "spare", 00:24:51.524 "progress": { 00:24:51.524 "blocks": 45056, 00:24:51.524 "percent": 70 00:24:51.524 } 00:24:51.524 }, 00:24:51.524 "base_bdevs_list": [ 00:24:51.524 { 00:24:51.524 "name": "spare", 00:24:51.524 "uuid": 
"63ede2bc-e206-529b-8f68-e9f38d481394", 00:24:51.524 "is_configured": true, 00:24:51.524 "data_offset": 2048, 00:24:51.524 "data_size": 63488 00:24:51.524 }, 00:24:51.524 { 00:24:51.524 "name": null, 00:24:51.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.524 "is_configured": false, 00:24:51.524 "data_offset": 0, 00:24:51.524 "data_size": 63488 00:24:51.524 }, 00:24:51.524 { 00:24:51.524 "name": "BaseBdev3", 00:24:51.524 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:51.524 "is_configured": true, 00:24:51.524 "data_offset": 2048, 00:24:51.524 "data_size": 63488 00:24:51.524 }, 00:24:51.524 { 00:24:51.524 "name": "BaseBdev4", 00:24:51.524 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:51.524 "is_configured": true, 00:24:51.524 "data_offset": 2048, 00:24:51.524 "data_size": 63488 00:24:51.524 } 00:24:51.525 ] 00:24:51.525 }' 00:24:51.525 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:51.525 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:51.525 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:51.525 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:51.525 12:56:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:52.345 84.83 IOPS, 254.50 MiB/s [2024-12-05T12:56:34.932Z] [2024-12-05 12:56:34.728683] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:52.345 [2024-12-05 12:56:34.824510] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:52.345 [2024-12-05 12:56:34.827484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:52.602 12:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:52.602 12:56:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:52.602 12:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:52.602 12:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:52.602 12:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:52.602 12:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:52.602 12:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.602 12:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.602 12:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.602 12:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:52.602 12:56:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.602 12:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:52.602 "name": "raid_bdev1", 00:24:52.602 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:52.602 "strip_size_kb": 0, 00:24:52.602 "state": "online", 00:24:52.602 "raid_level": "raid1", 00:24:52.602 "superblock": true, 00:24:52.602 "num_base_bdevs": 4, 00:24:52.602 "num_base_bdevs_discovered": 3, 00:24:52.602 "num_base_bdevs_operational": 3, 00:24:52.602 "base_bdevs_list": [ 00:24:52.602 { 00:24:52.602 "name": "spare", 00:24:52.602 "uuid": "63ede2bc-e206-529b-8f68-e9f38d481394", 00:24:52.602 "is_configured": true, 00:24:52.602 "data_offset": 2048, 00:24:52.602 "data_size": 63488 00:24:52.602 }, 00:24:52.602 { 00:24:52.602 "name": null, 00:24:52.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.602 "is_configured": false, 00:24:52.602 
"data_offset": 0, 00:24:52.602 "data_size": 63488 00:24:52.602 }, 00:24:52.602 { 00:24:52.602 "name": "BaseBdev3", 00:24:52.602 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:52.602 "is_configured": true, 00:24:52.602 "data_offset": 2048, 00:24:52.602 "data_size": 63488 00:24:52.602 }, 00:24:52.602 { 00:24:52.602 "name": "BaseBdev4", 00:24:52.602 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:52.602 "is_configured": true, 00:24:52.602 "data_offset": 2048, 00:24:52.602 "data_size": 63488 00:24:52.602 } 00:24:52.602 ] 00:24:52.602 }' 00:24:52.602 12:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:52.602 12:56:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:52.602 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:52.602 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:52.602 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:24:52.602 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:52.602 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:52.602 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:52.602 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:52.602 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:52.602 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.602 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.602 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.602 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:52.602 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.602 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:52.602 "name": "raid_bdev1", 00:24:52.602 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:52.602 "strip_size_kb": 0, 00:24:52.602 "state": "online", 00:24:52.602 "raid_level": "raid1", 00:24:52.602 "superblock": true, 00:24:52.602 "num_base_bdevs": 4, 00:24:52.602 "num_base_bdevs_discovered": 3, 00:24:52.602 "num_base_bdevs_operational": 3, 00:24:52.602 "base_bdevs_list": [ 00:24:52.602 { 00:24:52.602 "name": "spare", 00:24:52.602 "uuid": "63ede2bc-e206-529b-8f68-e9f38d481394", 00:24:52.602 "is_configured": true, 00:24:52.602 "data_offset": 2048, 00:24:52.602 "data_size": 63488 00:24:52.602 }, 00:24:52.602 { 00:24:52.602 "name": null, 00:24:52.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.602 "is_configured": false, 00:24:52.602 "data_offset": 0, 00:24:52.602 "data_size": 63488 00:24:52.602 }, 00:24:52.602 { 00:24:52.602 "name": "BaseBdev3", 00:24:52.602 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:52.602 "is_configured": true, 00:24:52.602 "data_offset": 2048, 00:24:52.602 "data_size": 63488 00:24:52.602 }, 00:24:52.602 { 00:24:52.602 "name": "BaseBdev4", 00:24:52.602 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:52.602 "is_configured": true, 00:24:52.602 "data_offset": 2048, 00:24:52.602 "data_size": 63488 00:24:52.602 } 00:24:52.602 ] 00:24:52.602 }' 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:52.603 "name": "raid_bdev1", 00:24:52.603 "uuid": 
"61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:52.603 "strip_size_kb": 0, 00:24:52.603 "state": "online", 00:24:52.603 "raid_level": "raid1", 00:24:52.603 "superblock": true, 00:24:52.603 "num_base_bdevs": 4, 00:24:52.603 "num_base_bdevs_discovered": 3, 00:24:52.603 "num_base_bdevs_operational": 3, 00:24:52.603 "base_bdevs_list": [ 00:24:52.603 { 00:24:52.603 "name": "spare", 00:24:52.603 "uuid": "63ede2bc-e206-529b-8f68-e9f38d481394", 00:24:52.603 "is_configured": true, 00:24:52.603 "data_offset": 2048, 00:24:52.603 "data_size": 63488 00:24:52.603 }, 00:24:52.603 { 00:24:52.603 "name": null, 00:24:52.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.603 "is_configured": false, 00:24:52.603 "data_offset": 0, 00:24:52.603 "data_size": 63488 00:24:52.603 }, 00:24:52.603 { 00:24:52.603 "name": "BaseBdev3", 00:24:52.603 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:52.603 "is_configured": true, 00:24:52.603 "data_offset": 2048, 00:24:52.603 "data_size": 63488 00:24:52.603 }, 00:24:52.603 { 00:24:52.603 "name": "BaseBdev4", 00:24:52.603 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:52.603 "is_configured": true, 00:24:52.603 "data_offset": 2048, 00:24:52.603 "data_size": 63488 00:24:52.603 } 00:24:52.603 ] 00:24:52.603 }' 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:52.603 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:52.859 76.71 IOPS, 230.14 MiB/s [2024-12-05T12:56:35.446Z] 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:52.859 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.859 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:52.859 [2024-12-05 12:56:35.437528] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:52.859 [2024-12-05 
12:56:35.437554] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:53.116 00:24:53.116 Latency(us) 00:24:53.116 [2024-12-05T12:56:35.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.116 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:53.116 raid_bdev1 : 7.24 76.08 228.23 0.00 0.00 17079.41 297.75 114536.76 00:24:53.116 [2024-12-05T12:56:35.703Z] =================================================================================================================== 00:24:53.116 [2024-12-05T12:56:35.703Z] Total : 76.08 228.23 0.00 0.00 17079.41 297.75 114536.76 00:24:53.116 { 00:24:53.116 "results": [ 00:24:53.116 { 00:24:53.116 "job": "raid_bdev1", 00:24:53.116 "core_mask": "0x1", 00:24:53.116 "workload": "randrw", 00:24:53.116 "percentage": 50, 00:24:53.116 "status": "finished", 00:24:53.116 "queue_depth": 2, 00:24:53.116 "io_size": 3145728, 00:24:53.116 "runtime": 7.242599, 00:24:53.116 "iops": 76.07766217624363, 00:24:53.116 "mibps": 228.23298652873092, 00:24:53.116 "io_failed": 0, 00:24:53.116 "io_timeout": 0, 00:24:53.116 "avg_latency_us": 17079.405076085437, 00:24:53.116 "min_latency_us": 297.7476923076923, 00:24:53.116 "max_latency_us": 114536.76307692307 00:24:53.116 } 00:24:53.116 ], 00:24:53.116 "core_count": 1 00:24:53.116 } 00:24:53.116 [2024-12-05 12:56:35.482171] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:53.116 [2024-12-05 12:56:35.482228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:53.116 [2024-12-05 12:56:35.482316] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:53.116 [2024-12-05 12:56:35.482325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:53.116 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.116 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:53.116 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.116 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:53.116 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:24:53.116 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.116 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:53.116 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:53.116 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:24:53.116 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:24:53.116 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:53.116 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:53.116 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:53.116 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:53.116 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:53.116 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:53.116 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:53.116 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:53.116 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:24:53.372 /dev/nbd0 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:53.372 1+0 records in 00:24:53.372 1+0 records out 00:24:53.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345721 s, 11.8 MB/s 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:53.372 12:56:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 
00:24:53.628 /dev/nbd1 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:53.628 1+0 records in 00:24:53.628 1+0 records out 00:24:53.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293633 s, 13.9 MB/s 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@893 -- # return 0 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:53.628 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:53.894 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:53.894 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:53.894 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:53.894 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:53.894 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:53.894 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:53.894 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:53.894 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:24:53.894 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:24:53.894 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:24:53.894 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:24:53.894 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:53.894 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:24:53.894 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:53.894 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:53.894 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:53.894 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:53.894 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:53.894 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:53.894 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:24:54.153 /dev/nbd1 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:54.153 12:56:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:54.153 1+0 records in 00:24:54.153 1+0 records out 00:24:54.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281101 s, 14.6 MB/s 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # 
local rpc_server=/var/tmp/spdk.sock 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:54.153 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:54.410 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:54.410 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:54.410 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:54.410 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:54.410 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:54.410 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:54.410 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:54.410 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:54.410 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:54.410 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:54.410 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:54.410 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:54.410 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # 
local i 00:24:54.410 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:54.410 12:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- 
# set +x 00:24:54.667 [2024-12-05 12:56:37.079644] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:54.667 [2024-12-05 12:56:37.079690] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:54.667 [2024-12-05 12:56:37.079708] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:24:54.667 [2024-12-05 12:56:37.079715] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:54.667 [2024-12-05 12:56:37.081555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:54.667 [2024-12-05 12:56:37.081587] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:54.667 [2024-12-05 12:56:37.081661] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:54.667 [2024-12-05 12:56:37.081700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:54.667 [2024-12-05 12:56:37.081813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:54.667 [2024-12-05 12:56:37.081896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:54.667 spare 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.667 [2024-12-05 12:56:37.181978] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:54.667 [2024-12-05 12:56:37.182015] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:54.667 [2024-12-05 12:56:37.182300] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:24:54.667 [2024-12-05 12:56:37.182453] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:54.667 [2024-12-05 12:56:37.182472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:54.667 [2024-12-05 12:56:37.182627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.667 12:56:37 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:54.667 "name": "raid_bdev1", 00:24:54.667 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:54.667 "strip_size_kb": 0, 00:24:54.667 "state": "online", 00:24:54.667 "raid_level": "raid1", 00:24:54.667 "superblock": true, 00:24:54.667 "num_base_bdevs": 4, 00:24:54.667 "num_base_bdevs_discovered": 3, 00:24:54.667 "num_base_bdevs_operational": 3, 00:24:54.667 "base_bdevs_list": [ 00:24:54.667 { 00:24:54.667 "name": "spare", 00:24:54.667 "uuid": "63ede2bc-e206-529b-8f68-e9f38d481394", 00:24:54.667 "is_configured": true, 00:24:54.667 "data_offset": 2048, 00:24:54.667 "data_size": 63488 00:24:54.667 }, 00:24:54.667 { 00:24:54.667 "name": null, 00:24:54.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.667 "is_configured": false, 00:24:54.667 "data_offset": 2048, 00:24:54.667 "data_size": 63488 00:24:54.667 }, 00:24:54.667 { 00:24:54.667 "name": "BaseBdev3", 00:24:54.667 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:54.667 "is_configured": true, 00:24:54.667 "data_offset": 2048, 00:24:54.667 "data_size": 63488 00:24:54.667 }, 00:24:54.667 { 00:24:54.667 "name": "BaseBdev4", 00:24:54.667 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:54.667 "is_configured": true, 00:24:54.667 "data_offset": 2048, 00:24:54.667 "data_size": 63488 00:24:54.667 } 00:24:54.667 ] 00:24:54.667 }' 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:54.667 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.925 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:54.925 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:54.925 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:54.925 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:54.925 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:54.925 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.925 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.925 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:54.925 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.925 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.182 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:55.182 "name": "raid_bdev1", 00:24:55.182 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:55.182 "strip_size_kb": 0, 00:24:55.182 "state": "online", 00:24:55.182 "raid_level": "raid1", 00:24:55.182 "superblock": true, 00:24:55.182 "num_base_bdevs": 4, 00:24:55.182 "num_base_bdevs_discovered": 3, 00:24:55.182 "num_base_bdevs_operational": 3, 00:24:55.182 "base_bdevs_list": [ 00:24:55.182 { 00:24:55.182 "name": "spare", 00:24:55.182 "uuid": "63ede2bc-e206-529b-8f68-e9f38d481394", 00:24:55.182 "is_configured": true, 00:24:55.182 "data_offset": 2048, 00:24:55.182 "data_size": 63488 00:24:55.182 }, 00:24:55.182 { 00:24:55.182 "name": null, 00:24:55.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.182 "is_configured": false, 00:24:55.182 "data_offset": 2048, 00:24:55.182 "data_size": 63488 
00:24:55.182 }, 00:24:55.182 { 00:24:55.182 "name": "BaseBdev3", 00:24:55.182 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:55.182 "is_configured": true, 00:24:55.182 "data_offset": 2048, 00:24:55.182 "data_size": 63488 00:24:55.182 }, 00:24:55.182 { 00:24:55.182 "name": "BaseBdev4", 00:24:55.182 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:55.182 "is_configured": true, 00:24:55.182 "data_offset": 2048, 00:24:55.182 "data_size": 63488 00:24:55.182 } 00:24:55.182 ] 00:24:55.182 }' 00:24:55.182 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:55.182 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:55.182 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:55.182 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:55.182 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.182 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:55.183 [2024-12-05 
12:56:37.611851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:55.183 
"name": "raid_bdev1", 00:24:55.183 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:55.183 "strip_size_kb": 0, 00:24:55.183 "state": "online", 00:24:55.183 "raid_level": "raid1", 00:24:55.183 "superblock": true, 00:24:55.183 "num_base_bdevs": 4, 00:24:55.183 "num_base_bdevs_discovered": 2, 00:24:55.183 "num_base_bdevs_operational": 2, 00:24:55.183 "base_bdevs_list": [ 00:24:55.183 { 00:24:55.183 "name": null, 00:24:55.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.183 "is_configured": false, 00:24:55.183 "data_offset": 0, 00:24:55.183 "data_size": 63488 00:24:55.183 }, 00:24:55.183 { 00:24:55.183 "name": null, 00:24:55.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.183 "is_configured": false, 00:24:55.183 "data_offset": 2048, 00:24:55.183 "data_size": 63488 00:24:55.183 }, 00:24:55.183 { 00:24:55.183 "name": "BaseBdev3", 00:24:55.183 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:55.183 "is_configured": true, 00:24:55.183 "data_offset": 2048, 00:24:55.183 "data_size": 63488 00:24:55.183 }, 00:24:55.183 { 00:24:55.183 "name": "BaseBdev4", 00:24:55.183 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:55.183 "is_configured": true, 00:24:55.183 "data_offset": 2048, 00:24:55.183 "data_size": 63488 00:24:55.183 } 00:24:55.183 ] 00:24:55.183 }' 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:55.183 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:55.440 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:55.440 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.440 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:55.440 [2024-12-05 12:56:37.947969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:55.440 [2024-12-05 
12:56:37.948114] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:24:55.440 [2024-12-05 12:56:37.948126] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:55.440 [2024-12-05 12:56:37.948161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:55.440 [2024-12-05 12:56:37.955942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:24:55.440 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.440 12:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:55.440 [2024-12-05 12:56:37.957536] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:56.813 12:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:56.813 12:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:56.813 12:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:56.813 12:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:56.813 12:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:56.813 12:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:56.813 12:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.813 12:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.813 12:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:56.813 12:56:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.813 12:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:56.813 "name": "raid_bdev1", 00:24:56.813 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:56.813 "strip_size_kb": 0, 00:24:56.813 "state": "online", 00:24:56.813 "raid_level": "raid1", 00:24:56.813 "superblock": true, 00:24:56.813 "num_base_bdevs": 4, 00:24:56.813 "num_base_bdevs_discovered": 3, 00:24:56.813 "num_base_bdevs_operational": 3, 00:24:56.813 "process": { 00:24:56.813 "type": "rebuild", 00:24:56.813 "target": "spare", 00:24:56.813 "progress": { 00:24:56.813 "blocks": 20480, 00:24:56.813 "percent": 32 00:24:56.813 } 00:24:56.813 }, 00:24:56.813 "base_bdevs_list": [ 00:24:56.813 { 00:24:56.813 "name": "spare", 00:24:56.813 "uuid": "63ede2bc-e206-529b-8f68-e9f38d481394", 00:24:56.813 "is_configured": true, 00:24:56.813 "data_offset": 2048, 00:24:56.813 "data_size": 63488 00:24:56.813 }, 00:24:56.813 { 00:24:56.813 "name": null, 00:24:56.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.813 "is_configured": false, 00:24:56.813 "data_offset": 2048, 00:24:56.813 "data_size": 63488 00:24:56.813 }, 00:24:56.813 { 00:24:56.813 "name": "BaseBdev3", 00:24:56.813 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:56.813 "is_configured": true, 00:24:56.813 "data_offset": 2048, 00:24:56.814 "data_size": 63488 00:24:56.814 }, 00:24:56.814 { 00:24:56.814 "name": "BaseBdev4", 00:24:56.814 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:56.814 "is_configured": true, 00:24:56.814 "data_offset": 2048, 00:24:56.814 "data_size": 63488 00:24:56.814 } 00:24:56.814 ] 00:24:56.814 }' 00:24:56.814 12:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:56.814 [2024-12-05 12:56:39.059870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:56.814 [2024-12-05 12:56:39.062539] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:56.814 [2024-12-05 12:56:39.062588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:56.814 [2024-12-05 12:56:39.062602] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:56.814 [2024-12-05 12:56:39.062608] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:56.814 "name": "raid_bdev1", 00:24:56.814 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:56.814 "strip_size_kb": 0, 00:24:56.814 "state": "online", 00:24:56.814 "raid_level": "raid1", 00:24:56.814 "superblock": true, 00:24:56.814 "num_base_bdevs": 4, 00:24:56.814 "num_base_bdevs_discovered": 2, 00:24:56.814 "num_base_bdevs_operational": 2, 00:24:56.814 "base_bdevs_list": [ 00:24:56.814 { 00:24:56.814 "name": null, 00:24:56.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.814 "is_configured": false, 00:24:56.814 "data_offset": 0, 00:24:56.814 "data_size": 63488 00:24:56.814 }, 00:24:56.814 { 00:24:56.814 "name": null, 00:24:56.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.814 "is_configured": false, 00:24:56.814 "data_offset": 2048, 00:24:56.814 "data_size": 63488 00:24:56.814 }, 00:24:56.814 { 00:24:56.814 "name": "BaseBdev3", 00:24:56.814 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:56.814 "is_configured": true, 
00:24:56.814 "data_offset": 2048, 00:24:56.814 "data_size": 63488 00:24:56.814 }, 00:24:56.814 { 00:24:56.814 "name": "BaseBdev4", 00:24:56.814 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:56.814 "is_configured": true, 00:24:56.814 "data_offset": 2048, 00:24:56.814 "data_size": 63488 00:24:56.814 } 00:24:56.814 ] 00:24:56.814 }' 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:56.814 [2024-12-05 12:56:39.387695] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:56.814 [2024-12-05 12:56:39.387749] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:56.814 [2024-12-05 12:56:39.387774] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:24:56.814 [2024-12-05 12:56:39.387783] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:56.814 [2024-12-05 12:56:39.388152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:56.814 [2024-12-05 12:56:39.388172] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:56.814 [2024-12-05 12:56:39.388262] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:56.814 [2024-12-05 12:56:39.388271] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:24:56.814 [2024-12-05 12:56:39.388281] 
bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:56.814 [2024-12-05 12:56:39.388295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:56.814 [2024-12-05 12:56:39.396277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:24:56.814 spare 00:24:56.814 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.072 12:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:24:57.072 [2024-12-05 12:56:39.397826] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:58.028 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:58.028 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:58.028 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:58.028 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:58.028 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:58.028 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:58.028 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.028 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.028 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.028 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.028 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:58.028 "name": "raid_bdev1", 00:24:58.028 
"uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:58.028 "strip_size_kb": 0, 00:24:58.028 "state": "online", 00:24:58.028 "raid_level": "raid1", 00:24:58.028 "superblock": true, 00:24:58.028 "num_base_bdevs": 4, 00:24:58.028 "num_base_bdevs_discovered": 3, 00:24:58.028 "num_base_bdevs_operational": 3, 00:24:58.028 "process": { 00:24:58.028 "type": "rebuild", 00:24:58.028 "target": "spare", 00:24:58.028 "progress": { 00:24:58.028 "blocks": 20480, 00:24:58.028 "percent": 32 00:24:58.028 } 00:24:58.028 }, 00:24:58.028 "base_bdevs_list": [ 00:24:58.028 { 00:24:58.028 "name": "spare", 00:24:58.028 "uuid": "63ede2bc-e206-529b-8f68-e9f38d481394", 00:24:58.028 "is_configured": true, 00:24:58.028 "data_offset": 2048, 00:24:58.028 "data_size": 63488 00:24:58.028 }, 00:24:58.028 { 00:24:58.028 "name": null, 00:24:58.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.028 "is_configured": false, 00:24:58.028 "data_offset": 2048, 00:24:58.028 "data_size": 63488 00:24:58.028 }, 00:24:58.028 { 00:24:58.028 "name": "BaseBdev3", 00:24:58.028 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:58.028 "is_configured": true, 00:24:58.028 "data_offset": 2048, 00:24:58.028 "data_size": 63488 00:24:58.028 }, 00:24:58.028 { 00:24:58.028 "name": "BaseBdev4", 00:24:58.028 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:58.028 "is_configured": true, 00:24:58.028 "data_offset": 2048, 00:24:58.028 "data_size": 63488 00:24:58.028 } 00:24:58.028 ] 00:24:58.028 }' 00:24:58.028 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:58.028 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:58.028 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:58.028 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:58.028 12:56:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:58.028 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.028 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.028 [2024-12-05 12:56:40.508226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:58.028 [2024-12-05 12:56:40.603360] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:58.028 [2024-12-05 12:56:40.603425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:58.028 [2024-12-05 12:56:40.603438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:58.028 [2024-12-05 12:56:40.603448] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:58.286 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.286 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:58.286 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:58.286 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:58.286 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:58.286 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:58.286 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:58.286 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:58.286 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:58.286 12:56:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:58.286 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:58.286 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:58.286 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.286 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.286 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.286 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.286 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:58.286 "name": "raid_bdev1", 00:24:58.286 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:58.286 "strip_size_kb": 0, 00:24:58.286 "state": "online", 00:24:58.286 "raid_level": "raid1", 00:24:58.286 "superblock": true, 00:24:58.286 "num_base_bdevs": 4, 00:24:58.286 "num_base_bdevs_discovered": 2, 00:24:58.286 "num_base_bdevs_operational": 2, 00:24:58.286 "base_bdevs_list": [ 00:24:58.286 { 00:24:58.286 "name": null, 00:24:58.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.286 "is_configured": false, 00:24:58.286 "data_offset": 0, 00:24:58.286 "data_size": 63488 00:24:58.286 }, 00:24:58.286 { 00:24:58.286 "name": null, 00:24:58.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.286 "is_configured": false, 00:24:58.286 "data_offset": 2048, 00:24:58.286 "data_size": 63488 00:24:58.286 }, 00:24:58.286 { 00:24:58.286 "name": "BaseBdev3", 00:24:58.286 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:58.286 "is_configured": true, 00:24:58.286 "data_offset": 2048, 00:24:58.286 "data_size": 63488 00:24:58.286 }, 00:24:58.286 { 00:24:58.286 "name": "BaseBdev4", 00:24:58.286 "uuid": 
"77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:58.286 "is_configured": true, 00:24:58.286 "data_offset": 2048, 00:24:58.286 "data_size": 63488 00:24:58.286 } 00:24:58.286 ] 00:24:58.286 }' 00:24:58.286 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:58.286 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.544 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:58.544 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:58.544 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:58.544 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:58.544 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:58.544 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:58.544 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.544 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.544 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.544 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.544 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:58.544 "name": "raid_bdev1", 00:24:58.544 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:58.544 "strip_size_kb": 0, 00:24:58.544 "state": "online", 00:24:58.544 "raid_level": "raid1", 00:24:58.544 "superblock": true, 00:24:58.544 "num_base_bdevs": 4, 00:24:58.544 "num_base_bdevs_discovered": 2, 00:24:58.544 "num_base_bdevs_operational": 2, 00:24:58.544 
"base_bdevs_list": [ 00:24:58.544 { 00:24:58.544 "name": null, 00:24:58.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.544 "is_configured": false, 00:24:58.544 "data_offset": 0, 00:24:58.544 "data_size": 63488 00:24:58.544 }, 00:24:58.544 { 00:24:58.544 "name": null, 00:24:58.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.544 "is_configured": false, 00:24:58.544 "data_offset": 2048, 00:24:58.544 "data_size": 63488 00:24:58.544 }, 00:24:58.544 { 00:24:58.544 "name": "BaseBdev3", 00:24:58.544 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:58.544 "is_configured": true, 00:24:58.544 "data_offset": 2048, 00:24:58.544 "data_size": 63488 00:24:58.544 }, 00:24:58.544 { 00:24:58.544 "name": "BaseBdev4", 00:24:58.544 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:58.544 "is_configured": true, 00:24:58.544 "data_offset": 2048, 00:24:58.544 "data_size": 63488 00:24:58.544 } 00:24:58.544 ] 00:24:58.544 }' 00:24:58.544 12:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:58.544 12:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:58.544 12:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:58.544 12:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:58.544 12:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:24:58.544 12:56:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.544 12:56:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.544 12:56:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.544 12:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 
00:24:58.544 12:56:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.544 12:56:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.544 [2024-12-05 12:56:41.076670] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:58.544 [2024-12-05 12:56:41.076724] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:58.544 [2024-12-05 12:56:41.076742] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:24:58.544 [2024-12-05 12:56:41.076751] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:58.545 [2024-12-05 12:56:41.077123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:58.545 [2024-12-05 12:56:41.077136] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:58.545 [2024-12-05 12:56:41.077196] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:58.545 [2024-12-05 12:56:41.077209] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:24:58.545 [2024-12-05 12:56:41.077215] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:58.545 [2024-12-05 12:56:41.077225] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:24:58.545 BaseBdev1 00:24:58.545 12:56:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.545 12:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:59.919 "name": "raid_bdev1", 00:24:59.919 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:59.919 "strip_size_kb": 0, 00:24:59.919 "state": "online", 00:24:59.919 "raid_level": "raid1", 00:24:59.919 "superblock": true, 00:24:59.919 "num_base_bdevs": 4, 00:24:59.919 "num_base_bdevs_discovered": 2, 00:24:59.919 "num_base_bdevs_operational": 2, 00:24:59.919 "base_bdevs_list": [ 00:24:59.919 { 00:24:59.919 
"name": null, 00:24:59.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.919 "is_configured": false, 00:24:59.919 "data_offset": 0, 00:24:59.919 "data_size": 63488 00:24:59.919 }, 00:24:59.919 { 00:24:59.919 "name": null, 00:24:59.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.919 "is_configured": false, 00:24:59.919 "data_offset": 2048, 00:24:59.919 "data_size": 63488 00:24:59.919 }, 00:24:59.919 { 00:24:59.919 "name": "BaseBdev3", 00:24:59.919 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:59.919 "is_configured": true, 00:24:59.919 "data_offset": 2048, 00:24:59.919 "data_size": 63488 00:24:59.919 }, 00:24:59.919 { 00:24:59.919 "name": "BaseBdev4", 00:24:59.919 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:59.919 "is_configured": true, 00:24:59.919 "data_offset": 2048, 00:24:59.919 "data_size": 63488 00:24:59.919 } 00:24:59.919 ] 00:24:59.919 }' 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:59.919 "name": "raid_bdev1", 00:24:59.919 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:24:59.919 "strip_size_kb": 0, 00:24:59.919 "state": "online", 00:24:59.919 "raid_level": "raid1", 00:24:59.919 "superblock": true, 00:24:59.919 "num_base_bdevs": 4, 00:24:59.919 "num_base_bdevs_discovered": 2, 00:24:59.919 "num_base_bdevs_operational": 2, 00:24:59.919 "base_bdevs_list": [ 00:24:59.919 { 00:24:59.919 "name": null, 00:24:59.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.919 "is_configured": false, 00:24:59.919 "data_offset": 0, 00:24:59.919 "data_size": 63488 00:24:59.919 }, 00:24:59.919 { 00:24:59.919 "name": null, 00:24:59.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.919 "is_configured": false, 00:24:59.919 "data_offset": 2048, 00:24:59.919 "data_size": 63488 00:24:59.919 }, 00:24:59.919 { 00:24:59.919 "name": "BaseBdev3", 00:24:59.919 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:24:59.919 "is_configured": true, 00:24:59.919 "data_offset": 2048, 00:24:59.919 "data_size": 63488 00:24:59.919 }, 00:24:59.919 { 00:24:59.919 "name": "BaseBdev4", 00:24:59.919 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:24:59.919 "is_configured": true, 00:24:59.919 "data_offset": 2048, 00:24:59.919 "data_size": 63488 00:24:59.919 } 00:24:59.919 ] 00:24:59.919 }' 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:59.919 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:59.920 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:59.920 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:59.920 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:59.920 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:59.920 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.920 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:59.920 [2024-12-05 12:56:42.501136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:59.920 [2024-12-05 12:56:42.501257] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:24:59.920 [2024-12-05 12:56:42.501267] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:00.179 request: 00:25:00.179 { 00:25:00.179 "base_bdev": "BaseBdev1", 00:25:00.179 "raid_bdev": "raid_bdev1", 00:25:00.179 "method": "bdev_raid_add_base_bdev", 00:25:00.179 "req_id": 1 
00:25:00.179 } 00:25:00.179 Got JSON-RPC error response 00:25:00.179 response: 00:25:00.179 { 00:25:00.179 "code": -22, 00:25:00.179 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:00.179 } 00:25:00.179 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:00.179 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:25:00.179 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:00.179 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:00.179 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:00.179 12:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:25:01.178 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:01.178 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:01.178 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:01.178 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:01.178 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:01.178 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:01.178 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:01.178 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:01.178 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:01.178 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:01.178 12:56:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.178 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.178 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.178 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:01.178 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.178 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:01.178 "name": "raid_bdev1", 00:25:01.178 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:25:01.178 "strip_size_kb": 0, 00:25:01.178 "state": "online", 00:25:01.178 "raid_level": "raid1", 00:25:01.178 "superblock": true, 00:25:01.178 "num_base_bdevs": 4, 00:25:01.178 "num_base_bdevs_discovered": 2, 00:25:01.178 "num_base_bdevs_operational": 2, 00:25:01.178 "base_bdevs_list": [ 00:25:01.178 { 00:25:01.178 "name": null, 00:25:01.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.178 "is_configured": false, 00:25:01.178 "data_offset": 0, 00:25:01.178 "data_size": 63488 00:25:01.178 }, 00:25:01.178 { 00:25:01.178 "name": null, 00:25:01.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.178 "is_configured": false, 00:25:01.178 "data_offset": 2048, 00:25:01.178 "data_size": 63488 00:25:01.178 }, 00:25:01.178 { 00:25:01.178 "name": "BaseBdev3", 00:25:01.178 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:25:01.178 "is_configured": true, 00:25:01.178 "data_offset": 2048, 00:25:01.178 "data_size": 63488 00:25:01.178 }, 00:25:01.178 { 00:25:01.178 "name": "BaseBdev4", 00:25:01.178 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:25:01.178 "is_configured": true, 00:25:01.178 "data_offset": 2048, 00:25:01.178 "data_size": 63488 00:25:01.178 } 00:25:01.178 ] 00:25:01.178 }' 00:25:01.178 12:56:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:01.178 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:01.436 "name": "raid_bdev1", 00:25:01.436 "uuid": "61e0c3f9-9ddf-492a-b732-bb8bd30ea324", 00:25:01.436 "strip_size_kb": 0, 00:25:01.436 "state": "online", 00:25:01.436 "raid_level": "raid1", 00:25:01.436 "superblock": true, 00:25:01.436 "num_base_bdevs": 4, 00:25:01.436 "num_base_bdevs_discovered": 2, 00:25:01.436 "num_base_bdevs_operational": 2, 00:25:01.436 "base_bdevs_list": [ 00:25:01.436 { 00:25:01.436 "name": null, 00:25:01.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.436 "is_configured": false, 00:25:01.436 "data_offset": 0, 00:25:01.436 
"data_size": 63488 00:25:01.436 }, 00:25:01.436 { 00:25:01.436 "name": null, 00:25:01.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.436 "is_configured": false, 00:25:01.436 "data_offset": 2048, 00:25:01.436 "data_size": 63488 00:25:01.436 }, 00:25:01.436 { 00:25:01.436 "name": "BaseBdev3", 00:25:01.436 "uuid": "c598ba26-9035-5768-842b-4d7e26a9e002", 00:25:01.436 "is_configured": true, 00:25:01.436 "data_offset": 2048, 00:25:01.436 "data_size": 63488 00:25:01.436 }, 00:25:01.436 { 00:25:01.436 "name": "BaseBdev4", 00:25:01.436 "uuid": "77f631c0-6c83-5b7d-8847-c7e32aaf8f5e", 00:25:01.436 "is_configured": true, 00:25:01.436 "data_offset": 2048, 00:25:01.436 "data_size": 63488 00:25:01.436 } 00:25:01.436 ] 00:25:01.436 }' 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76760 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76760 ']' 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76760 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76760 00:25:01.436 killing process with pid 76760 00:25:01.436 Received shutdown signal, test time was about 15.724179 seconds 00:25:01.436 
00:25:01.436 Latency(us) 00:25:01.436 [2024-12-05T12:56:44.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.436 [2024-12-05T12:56:44.023Z] =================================================================================================================== 00:25:01.436 [2024-12-05T12:56:44.023Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76760' 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76760 00:25:01.436 [2024-12-05 12:56:43.951071] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:01.436 12:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76760 00:25:01.436 [2024-12-05 12:56:43.951167] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:01.436 [2024-12-05 12:56:43.951225] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:01.436 [2024-12-05 12:56:43.951233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:25:01.695 [2024-12-05 12:56:44.155253] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:02.260 12:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:25:02.260 00:25:02.260 real 0m18.142s 00:25:02.260 user 0m23.093s 00:25:02.260 sys 0m1.730s 00:25:02.260 ************************************ 00:25:02.260 END TEST raid_rebuild_test_sb_io 00:25:02.260 ************************************ 00:25:02.260 12:56:44 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:25:02.260 12:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:02.260 12:56:44 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:25:02.260 12:56:44 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:25:02.260 12:56:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:02.260 12:56:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:02.260 12:56:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:02.260 ************************************ 00:25:02.260 START TEST raid5f_state_function_test 00:25:02.260 ************************************ 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:02.260 
12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=77450 00:25:02.260 Process raid pid: 77450 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77450' 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 77450 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 77450 ']' 00:25:02.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.260 12:56:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:02.518 [2024-12-05 12:56:44.863644] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:25:02.518 [2024-12-05 12:56:44.863756] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.518 [2024-12-05 12:56:45.016360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.774 [2024-12-05 12:56:45.117593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.774 [2024-12-05 12:56:45.255160] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:02.774 [2024-12-05 12:56:45.255196] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.368 [2024-12-05 12:56:45.710576] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:03.368 [2024-12-05 12:56:45.710631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:03.368 [2024-12-05 12:56:45.710641] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:03.368 [2024-12-05 12:56:45.710651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:03.368 [2024-12-05 12:56:45.710657] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:25:03.368 [2024-12-05 12:56:45.710665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:25:03.368 12:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:03.368 "name": "Existed_Raid", 00:25:03.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.368 "strip_size_kb": 64, 00:25:03.368 "state": "configuring", 00:25:03.368 "raid_level": "raid5f", 00:25:03.368 "superblock": false, 00:25:03.368 "num_base_bdevs": 3, 00:25:03.368 "num_base_bdevs_discovered": 0, 00:25:03.368 "num_base_bdevs_operational": 3, 00:25:03.368 "base_bdevs_list": [ 00:25:03.368 { 00:25:03.368 "name": "BaseBdev1", 00:25:03.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.368 "is_configured": false, 00:25:03.368 "data_offset": 0, 00:25:03.368 "data_size": 0 00:25:03.368 }, 00:25:03.368 { 00:25:03.368 "name": "BaseBdev2", 00:25:03.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.368 "is_configured": false, 00:25:03.368 "data_offset": 0, 00:25:03.368 "data_size": 0 00:25:03.368 }, 00:25:03.368 { 00:25:03.368 "name": "BaseBdev3", 00:25:03.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.368 "is_configured": false, 00:25:03.368 "data_offset": 0, 00:25:03.368 "data_size": 0 00:25:03.368 } 00:25:03.369 ] 00:25:03.369 }' 00:25:03.369 12:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:03.369 12:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.626 12:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:03.626 12:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.626 12:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.626 [2024-12-05 12:56:45.994580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:03.626 [2024-12-05 12:56:45.994610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:25:03.626 12:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.626 12:56:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:03.626 12:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.626 12:56:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.626 [2024-12-05 12:56:46.002588] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:03.626 [2024-12-05 12:56:46.002626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:03.626 [2024-12-05 12:56:46.002634] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:03.626 [2024-12-05 12:56:46.002643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:03.626 [2024-12-05 12:56:46.002649] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:03.626 [2024-12-05 12:56:46.002657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.626 [2024-12-05 12:56:46.034950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:03.626 BaseBdev1 00:25:03.626 12:56:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.626 [ 00:25:03.626 { 00:25:03.626 "name": "BaseBdev1", 00:25:03.626 "aliases": [ 00:25:03.626 "f2898e3e-612a-4487-a7f3-ec6e244a1e91" 00:25:03.626 ], 00:25:03.626 "product_name": "Malloc disk", 00:25:03.626 "block_size": 512, 00:25:03.626 "num_blocks": 65536, 00:25:03.626 "uuid": "f2898e3e-612a-4487-a7f3-ec6e244a1e91", 00:25:03.626 "assigned_rate_limits": { 00:25:03.626 "rw_ios_per_sec": 0, 00:25:03.626 
"rw_mbytes_per_sec": 0, 00:25:03.626 "r_mbytes_per_sec": 0, 00:25:03.626 "w_mbytes_per_sec": 0 00:25:03.626 }, 00:25:03.626 "claimed": true, 00:25:03.626 "claim_type": "exclusive_write", 00:25:03.626 "zoned": false, 00:25:03.626 "supported_io_types": { 00:25:03.626 "read": true, 00:25:03.626 "write": true, 00:25:03.626 "unmap": true, 00:25:03.626 "flush": true, 00:25:03.626 "reset": true, 00:25:03.626 "nvme_admin": false, 00:25:03.626 "nvme_io": false, 00:25:03.626 "nvme_io_md": false, 00:25:03.626 "write_zeroes": true, 00:25:03.626 "zcopy": true, 00:25:03.626 "get_zone_info": false, 00:25:03.626 "zone_management": false, 00:25:03.626 "zone_append": false, 00:25:03.626 "compare": false, 00:25:03.626 "compare_and_write": false, 00:25:03.626 "abort": true, 00:25:03.626 "seek_hole": false, 00:25:03.626 "seek_data": false, 00:25:03.626 "copy": true, 00:25:03.626 "nvme_iov_md": false 00:25:03.626 }, 00:25:03.626 "memory_domains": [ 00:25:03.626 { 00:25:03.626 "dma_device_id": "system", 00:25:03.626 "dma_device_type": 1 00:25:03.626 }, 00:25:03.626 { 00:25:03.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.626 "dma_device_type": 2 00:25:03.626 } 00:25:03.626 ], 00:25:03.626 "driver_specific": {} 00:25:03.626 } 00:25:03.626 ] 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:03.626 12:56:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:03.626 "name": "Existed_Raid", 00:25:03.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.626 "strip_size_kb": 64, 00:25:03.626 "state": "configuring", 00:25:03.626 "raid_level": "raid5f", 00:25:03.626 "superblock": false, 00:25:03.626 "num_base_bdevs": 3, 00:25:03.626 "num_base_bdevs_discovered": 1, 00:25:03.626 "num_base_bdevs_operational": 3, 00:25:03.626 "base_bdevs_list": [ 00:25:03.626 { 00:25:03.626 "name": "BaseBdev1", 00:25:03.626 "uuid": "f2898e3e-612a-4487-a7f3-ec6e244a1e91", 00:25:03.626 "is_configured": true, 00:25:03.626 "data_offset": 0, 00:25:03.626 "data_size": 65536 00:25:03.626 }, 00:25:03.626 { 00:25:03.626 "name": 
"BaseBdev2", 00:25:03.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.626 "is_configured": false, 00:25:03.626 "data_offset": 0, 00:25:03.626 "data_size": 0 00:25:03.626 }, 00:25:03.626 { 00:25:03.626 "name": "BaseBdev3", 00:25:03.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.626 "is_configured": false, 00:25:03.626 "data_offset": 0, 00:25:03.626 "data_size": 0 00:25:03.626 } 00:25:03.626 ] 00:25:03.626 }' 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:03.626 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.907 [2024-12-05 12:56:46.367078] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:03.907 [2024-12-05 12:56:46.367237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.907 [2024-12-05 12:56:46.375113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:03.907 [2024-12-05 12:56:46.376981] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:25:03.907 [2024-12-05 12:56:46.377021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:03.907 [2024-12-05 12:56:46.377030] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:03.907 [2024-12-05 12:56:46.377039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:03.907 "name": "Existed_Raid", 00:25:03.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.907 "strip_size_kb": 64, 00:25:03.907 "state": "configuring", 00:25:03.907 "raid_level": "raid5f", 00:25:03.907 "superblock": false, 00:25:03.907 "num_base_bdevs": 3, 00:25:03.907 "num_base_bdevs_discovered": 1, 00:25:03.907 "num_base_bdevs_operational": 3, 00:25:03.907 "base_bdevs_list": [ 00:25:03.907 { 00:25:03.907 "name": "BaseBdev1", 00:25:03.907 "uuid": "f2898e3e-612a-4487-a7f3-ec6e244a1e91", 00:25:03.907 "is_configured": true, 00:25:03.907 "data_offset": 0, 00:25:03.907 "data_size": 65536 00:25:03.907 }, 00:25:03.907 { 00:25:03.907 "name": "BaseBdev2", 00:25:03.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.907 "is_configured": false, 00:25:03.907 "data_offset": 0, 00:25:03.907 "data_size": 0 00:25:03.907 }, 00:25:03.907 { 00:25:03.907 "name": "BaseBdev3", 00:25:03.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.907 "is_configured": false, 00:25:03.907 "data_offset": 0, 00:25:03.907 "data_size": 0 00:25:03.907 } 00:25:03.907 ] 00:25:03.907 }' 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:03.907 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.165 12:56:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:04.165 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.165 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.165 [2024-12-05 12:56:46.721828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:04.165 BaseBdev2 00:25:04.165 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.165 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:04.165 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:04.165 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:04.165 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:04.165 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:04.165 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:04.165 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:04.166 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.166 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.166 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.166 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:04.166 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.166 12:56:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:25:04.166 [ 00:25:04.166 { 00:25:04.166 "name": "BaseBdev2", 00:25:04.166 "aliases": [ 00:25:04.166 "f32aba62-b6a4-4730-b74b-bcea6baeb8b4" 00:25:04.166 ], 00:25:04.166 "product_name": "Malloc disk", 00:25:04.166 "block_size": 512, 00:25:04.166 "num_blocks": 65536, 00:25:04.166 "uuid": "f32aba62-b6a4-4730-b74b-bcea6baeb8b4", 00:25:04.166 "assigned_rate_limits": { 00:25:04.166 "rw_ios_per_sec": 0, 00:25:04.166 "rw_mbytes_per_sec": 0, 00:25:04.166 "r_mbytes_per_sec": 0, 00:25:04.166 "w_mbytes_per_sec": 0 00:25:04.166 }, 00:25:04.166 "claimed": true, 00:25:04.166 "claim_type": "exclusive_write", 00:25:04.166 "zoned": false, 00:25:04.166 "supported_io_types": { 00:25:04.166 "read": true, 00:25:04.166 "write": true, 00:25:04.166 "unmap": true, 00:25:04.166 "flush": true, 00:25:04.166 "reset": true, 00:25:04.166 "nvme_admin": false, 00:25:04.166 "nvme_io": false, 00:25:04.166 "nvme_io_md": false, 00:25:04.166 "write_zeroes": true, 00:25:04.166 "zcopy": true, 00:25:04.166 "get_zone_info": false, 00:25:04.166 "zone_management": false, 00:25:04.166 "zone_append": false, 00:25:04.166 "compare": false, 00:25:04.166 "compare_and_write": false, 00:25:04.166 "abort": true, 00:25:04.166 "seek_hole": false, 00:25:04.166 "seek_data": false, 00:25:04.166 "copy": true, 00:25:04.166 "nvme_iov_md": false 00:25:04.166 }, 00:25:04.166 "memory_domains": [ 00:25:04.166 { 00:25:04.166 "dma_device_id": "system", 00:25:04.166 "dma_device_type": 1 00:25:04.166 }, 00:25:04.166 { 00:25:04.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.166 "dma_device_type": 2 00:25:04.166 } 00:25:04.166 ], 00:25:04.166 "driver_specific": {} 00:25:04.166 } 00:25:04.166 ] 00:25:04.166 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.166 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:04.423 12:56:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:04.423 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:04.423 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:04.423 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:04.423 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:04.423 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:04.423 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:04.423 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:04.423 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:04.423 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:04.423 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:04.423 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:04.423 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:04.423 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:04.423 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.423 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.424 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.424 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:25:04.424 "name": "Existed_Raid", 00:25:04.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.424 "strip_size_kb": 64, 00:25:04.424 "state": "configuring", 00:25:04.424 "raid_level": "raid5f", 00:25:04.424 "superblock": false, 00:25:04.424 "num_base_bdevs": 3, 00:25:04.424 "num_base_bdevs_discovered": 2, 00:25:04.424 "num_base_bdevs_operational": 3, 00:25:04.424 "base_bdevs_list": [ 00:25:04.424 { 00:25:04.424 "name": "BaseBdev1", 00:25:04.424 "uuid": "f2898e3e-612a-4487-a7f3-ec6e244a1e91", 00:25:04.424 "is_configured": true, 00:25:04.424 "data_offset": 0, 00:25:04.424 "data_size": 65536 00:25:04.424 }, 00:25:04.424 { 00:25:04.424 "name": "BaseBdev2", 00:25:04.424 "uuid": "f32aba62-b6a4-4730-b74b-bcea6baeb8b4", 00:25:04.424 "is_configured": true, 00:25:04.424 "data_offset": 0, 00:25:04.424 "data_size": 65536 00:25:04.424 }, 00:25:04.424 { 00:25:04.424 "name": "BaseBdev3", 00:25:04.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.424 "is_configured": false, 00:25:04.424 "data_offset": 0, 00:25:04.424 "data_size": 0 00:25:04.424 } 00:25:04.424 ] 00:25:04.424 }' 00:25:04.424 12:56:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:04.424 12:56:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.681 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:04.681 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.681 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.681 [2024-12-05 12:56:47.088790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:04.681 [2024-12-05 12:56:47.088834] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:04.681 [2024-12-05 12:56:47.088848] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:25:04.681 [2024-12-05 12:56:47.089094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:04.681 [2024-12-05 12:56:47.092889] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:04.681 [2024-12-05 12:56:47.092908] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:04.681 [2024-12-05 12:56:47.093153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:04.681 BaseBdev3 00:25:04.681 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.681 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:25:04.681 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:04.681 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:04.681 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:04.681 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:04.681 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:04.681 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:04.681 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.681 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.681 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.681 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:25:04.681 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.681 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.681 [ 00:25:04.681 { 00:25:04.681 "name": "BaseBdev3", 00:25:04.681 "aliases": [ 00:25:04.681 "fff55791-f47f-426b-9100-94a7cec86118" 00:25:04.681 ], 00:25:04.681 "product_name": "Malloc disk", 00:25:04.681 "block_size": 512, 00:25:04.681 "num_blocks": 65536, 00:25:04.681 "uuid": "fff55791-f47f-426b-9100-94a7cec86118", 00:25:04.681 "assigned_rate_limits": { 00:25:04.681 "rw_ios_per_sec": 0, 00:25:04.681 "rw_mbytes_per_sec": 0, 00:25:04.681 "r_mbytes_per_sec": 0, 00:25:04.681 "w_mbytes_per_sec": 0 00:25:04.681 }, 00:25:04.681 "claimed": true, 00:25:04.681 "claim_type": "exclusive_write", 00:25:04.681 "zoned": false, 00:25:04.681 "supported_io_types": { 00:25:04.681 "read": true, 00:25:04.681 "write": true, 00:25:04.681 "unmap": true, 00:25:04.681 "flush": true, 00:25:04.681 "reset": true, 00:25:04.681 "nvme_admin": false, 00:25:04.681 "nvme_io": false, 00:25:04.681 "nvme_io_md": false, 00:25:04.681 "write_zeroes": true, 00:25:04.682 "zcopy": true, 00:25:04.682 "get_zone_info": false, 00:25:04.682 "zone_management": false, 00:25:04.682 "zone_append": false, 00:25:04.682 "compare": false, 00:25:04.682 "compare_and_write": false, 00:25:04.682 "abort": true, 00:25:04.682 "seek_hole": false, 00:25:04.682 "seek_data": false, 00:25:04.682 "copy": true, 00:25:04.682 "nvme_iov_md": false 00:25:04.682 }, 00:25:04.682 "memory_domains": [ 00:25:04.682 { 00:25:04.682 "dma_device_id": "system", 00:25:04.682 "dma_device_type": 1 00:25:04.682 }, 00:25:04.682 { 00:25:04.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.682 "dma_device_type": 2 00:25:04.682 } 00:25:04.682 ], 00:25:04.682 "driver_specific": {} 00:25:04.682 } 00:25:04.682 ] 00:25:04.682 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:04.682 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:04.682 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:04.682 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:04.682 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:04.682 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:04.682 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:04.682 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:04.682 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:04.682 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:04.682 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:04.682 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:04.682 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:04.682 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:04.682 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:04.682 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:04.682 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.682 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.682 12:56:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.682 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:04.682 "name": "Existed_Raid", 00:25:04.682 "uuid": "36296ed0-6890-425b-8b7f-25e491c6f098", 00:25:04.682 "strip_size_kb": 64, 00:25:04.682 "state": "online", 00:25:04.682 "raid_level": "raid5f", 00:25:04.682 "superblock": false, 00:25:04.682 "num_base_bdevs": 3, 00:25:04.682 "num_base_bdevs_discovered": 3, 00:25:04.682 "num_base_bdevs_operational": 3, 00:25:04.682 "base_bdevs_list": [ 00:25:04.682 { 00:25:04.682 "name": "BaseBdev1", 00:25:04.682 "uuid": "f2898e3e-612a-4487-a7f3-ec6e244a1e91", 00:25:04.682 "is_configured": true, 00:25:04.682 "data_offset": 0, 00:25:04.682 "data_size": 65536 00:25:04.682 }, 00:25:04.682 { 00:25:04.682 "name": "BaseBdev2", 00:25:04.682 "uuid": "f32aba62-b6a4-4730-b74b-bcea6baeb8b4", 00:25:04.682 "is_configured": true, 00:25:04.682 "data_offset": 0, 00:25:04.682 "data_size": 65536 00:25:04.682 }, 00:25:04.682 { 00:25:04.682 "name": "BaseBdev3", 00:25:04.682 "uuid": "fff55791-f47f-426b-9100-94a7cec86118", 00:25:04.682 "is_configured": true, 00:25:04.682 "data_offset": 0, 00:25:04.682 "data_size": 65536 00:25:04.682 } 00:25:04.682 ] 00:25:04.682 }' 00:25:04.682 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:04.682 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.939 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:04.939 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:04.939 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:04.939 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:04.939 12:56:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:04.939 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:04.939 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:04.939 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.939 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.939 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:04.939 [2024-12-05 12:56:47.425779] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:04.939 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.939 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:04.939 "name": "Existed_Raid", 00:25:04.939 "aliases": [ 00:25:04.939 "36296ed0-6890-425b-8b7f-25e491c6f098" 00:25:04.939 ], 00:25:04.939 "product_name": "Raid Volume", 00:25:04.939 "block_size": 512, 00:25:04.939 "num_blocks": 131072, 00:25:04.939 "uuid": "36296ed0-6890-425b-8b7f-25e491c6f098", 00:25:04.939 "assigned_rate_limits": { 00:25:04.939 "rw_ios_per_sec": 0, 00:25:04.939 "rw_mbytes_per_sec": 0, 00:25:04.939 "r_mbytes_per_sec": 0, 00:25:04.939 "w_mbytes_per_sec": 0 00:25:04.939 }, 00:25:04.939 "claimed": false, 00:25:04.939 "zoned": false, 00:25:04.939 "supported_io_types": { 00:25:04.939 "read": true, 00:25:04.939 "write": true, 00:25:04.939 "unmap": false, 00:25:04.939 "flush": false, 00:25:04.939 "reset": true, 00:25:04.939 "nvme_admin": false, 00:25:04.940 "nvme_io": false, 00:25:04.940 "nvme_io_md": false, 00:25:04.940 "write_zeroes": true, 00:25:04.940 "zcopy": false, 00:25:04.940 "get_zone_info": false, 00:25:04.940 "zone_management": false, 00:25:04.940 "zone_append": false, 
00:25:04.940 "compare": false, 00:25:04.940 "compare_and_write": false, 00:25:04.940 "abort": false, 00:25:04.940 "seek_hole": false, 00:25:04.940 "seek_data": false, 00:25:04.940 "copy": false, 00:25:04.940 "nvme_iov_md": false 00:25:04.940 }, 00:25:04.940 "driver_specific": { 00:25:04.940 "raid": { 00:25:04.940 "uuid": "36296ed0-6890-425b-8b7f-25e491c6f098", 00:25:04.940 "strip_size_kb": 64, 00:25:04.940 "state": "online", 00:25:04.940 "raid_level": "raid5f", 00:25:04.940 "superblock": false, 00:25:04.940 "num_base_bdevs": 3, 00:25:04.940 "num_base_bdevs_discovered": 3, 00:25:04.940 "num_base_bdevs_operational": 3, 00:25:04.940 "base_bdevs_list": [ 00:25:04.940 { 00:25:04.940 "name": "BaseBdev1", 00:25:04.940 "uuid": "f2898e3e-612a-4487-a7f3-ec6e244a1e91", 00:25:04.940 "is_configured": true, 00:25:04.940 "data_offset": 0, 00:25:04.940 "data_size": 65536 00:25:04.940 }, 00:25:04.940 { 00:25:04.940 "name": "BaseBdev2", 00:25:04.940 "uuid": "f32aba62-b6a4-4730-b74b-bcea6baeb8b4", 00:25:04.940 "is_configured": true, 00:25:04.940 "data_offset": 0, 00:25:04.940 "data_size": 65536 00:25:04.940 }, 00:25:04.940 { 00:25:04.940 "name": "BaseBdev3", 00:25:04.940 "uuid": "fff55791-f47f-426b-9100-94a7cec86118", 00:25:04.940 "is_configured": true, 00:25:04.940 "data_offset": 0, 00:25:04.940 "data_size": 65536 00:25:04.940 } 00:25:04.940 ] 00:25:04.940 } 00:25:04.940 } 00:25:04.940 }' 00:25:04.940 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:04.940 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:04.940 BaseBdev2 00:25:04.940 BaseBdev3' 00:25:04.940 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:04.940 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:25:04.940 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:04.940 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:04.940 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:04.940 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.940 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.200 [2024-12-05 12:56:47.609629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:25:05.200 
12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.200 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:05.200 "name": "Existed_Raid", 00:25:05.200 "uuid": "36296ed0-6890-425b-8b7f-25e491c6f098", 00:25:05.200 "strip_size_kb": 64, 00:25:05.200 "state": 
"online", 00:25:05.200 "raid_level": "raid5f", 00:25:05.200 "superblock": false, 00:25:05.200 "num_base_bdevs": 3, 00:25:05.200 "num_base_bdevs_discovered": 2, 00:25:05.200 "num_base_bdevs_operational": 2, 00:25:05.200 "base_bdevs_list": [ 00:25:05.200 { 00:25:05.200 "name": null, 00:25:05.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.200 "is_configured": false, 00:25:05.200 "data_offset": 0, 00:25:05.200 "data_size": 65536 00:25:05.200 }, 00:25:05.200 { 00:25:05.200 "name": "BaseBdev2", 00:25:05.200 "uuid": "f32aba62-b6a4-4730-b74b-bcea6baeb8b4", 00:25:05.200 "is_configured": true, 00:25:05.200 "data_offset": 0, 00:25:05.200 "data_size": 65536 00:25:05.200 }, 00:25:05.200 { 00:25:05.200 "name": "BaseBdev3", 00:25:05.200 "uuid": "fff55791-f47f-426b-9100-94a7cec86118", 00:25:05.200 "is_configured": true, 00:25:05.200 "data_offset": 0, 00:25:05.200 "data_size": 65536 00:25:05.200 } 00:25:05.200 ] 00:25:05.200 }' 00:25:05.201 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:05.201 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.460 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:05.461 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:05.461 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:05.461 12:56:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:05.461 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.461 12:56:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.461 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.461 12:56:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:05.461 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:05.461 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:05.461 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.461 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.461 [2024-12-05 12:56:48.020522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:05.461 [2024-12-05 12:56:48.020727] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:05.719 [2024-12-05 12:56:48.080219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.719 [2024-12-05 12:56:48.116291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:05.719 [2024-12-05 12:56:48.116335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:05.719 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.720 BaseBdev2 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:25:05.720 [ 00:25:05.720 { 00:25:05.720 "name": "BaseBdev2", 00:25:05.720 "aliases": [ 00:25:05.720 "7f3b9876-574c-45bb-91dd-9237098abe99" 00:25:05.720 ], 00:25:05.720 "product_name": "Malloc disk", 00:25:05.720 "block_size": 512, 00:25:05.720 "num_blocks": 65536, 00:25:05.720 "uuid": "7f3b9876-574c-45bb-91dd-9237098abe99", 00:25:05.720 "assigned_rate_limits": { 00:25:05.720 "rw_ios_per_sec": 0, 00:25:05.720 "rw_mbytes_per_sec": 0, 00:25:05.720 "r_mbytes_per_sec": 0, 00:25:05.720 "w_mbytes_per_sec": 0 00:25:05.720 }, 00:25:05.720 "claimed": false, 00:25:05.720 "zoned": false, 00:25:05.720 "supported_io_types": { 00:25:05.720 "read": true, 00:25:05.720 "write": true, 00:25:05.720 "unmap": true, 00:25:05.720 "flush": true, 00:25:05.720 "reset": true, 00:25:05.720 "nvme_admin": false, 00:25:05.720 "nvme_io": false, 00:25:05.720 "nvme_io_md": false, 00:25:05.720 "write_zeroes": true, 00:25:05.720 "zcopy": true, 00:25:05.720 "get_zone_info": false, 00:25:05.720 "zone_management": false, 00:25:05.720 "zone_append": false, 00:25:05.720 "compare": false, 00:25:05.720 "compare_and_write": false, 00:25:05.720 "abort": true, 00:25:05.720 "seek_hole": false, 00:25:05.720 "seek_data": false, 00:25:05.720 "copy": true, 00:25:05.720 "nvme_iov_md": false 00:25:05.720 }, 00:25:05.720 "memory_domains": [ 00:25:05.720 { 00:25:05.720 "dma_device_id": "system", 00:25:05.720 "dma_device_type": 1 00:25:05.720 }, 00:25:05.720 { 00:25:05.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.720 "dma_device_type": 2 00:25:05.720 } 00:25:05.720 ], 00:25:05.720 "driver_specific": {} 00:25:05.720 } 00:25:05.720 ] 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.720 BaseBdev3 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.720 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:25:05.978 [ 00:25:05.978 { 00:25:05.978 "name": "BaseBdev3", 00:25:05.978 "aliases": [ 00:25:05.978 "0bb88c46-5e22-4fe0-840a-95b0dafc086d" 00:25:05.978 ], 00:25:05.978 "product_name": "Malloc disk", 00:25:05.978 "block_size": 512, 00:25:05.978 "num_blocks": 65536, 00:25:05.978 "uuid": "0bb88c46-5e22-4fe0-840a-95b0dafc086d", 00:25:05.978 "assigned_rate_limits": { 00:25:05.978 "rw_ios_per_sec": 0, 00:25:05.978 "rw_mbytes_per_sec": 0, 00:25:05.978 "r_mbytes_per_sec": 0, 00:25:05.978 "w_mbytes_per_sec": 0 00:25:05.978 }, 00:25:05.978 "claimed": false, 00:25:05.978 "zoned": false, 00:25:05.978 "supported_io_types": { 00:25:05.978 "read": true, 00:25:05.978 "write": true, 00:25:05.978 "unmap": true, 00:25:05.978 "flush": true, 00:25:05.978 "reset": true, 00:25:05.978 "nvme_admin": false, 00:25:05.978 "nvme_io": false, 00:25:05.978 "nvme_io_md": false, 00:25:05.978 "write_zeroes": true, 00:25:05.978 "zcopy": true, 00:25:05.978 "get_zone_info": false, 00:25:05.978 "zone_management": false, 00:25:05.978 "zone_append": false, 00:25:05.978 "compare": false, 00:25:05.978 "compare_and_write": false, 00:25:05.978 "abort": true, 00:25:05.978 "seek_hole": false, 00:25:05.978 "seek_data": false, 00:25:05.978 "copy": true, 00:25:05.978 "nvme_iov_md": false 00:25:05.978 }, 00:25:05.978 "memory_domains": [ 00:25:05.978 { 00:25:05.978 "dma_device_id": "system", 00:25:05.978 "dma_device_type": 1 00:25:05.978 }, 00:25:05.978 { 00:25:05.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.978 "dma_device_type": 2 00:25:05.978 } 00:25:05.978 ], 00:25:05.978 "driver_specific": {} 00:25:05.978 } 00:25:05.978 ] 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:05.978 12:56:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.978 [2024-12-05 12:56:48.328091] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:05.978 [2024-12-05 12:56:48.328252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:05.978 [2024-12-05 12:56:48.328336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:05.978 [2024-12-05 12:56:48.330197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:05.978 12:56:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:05.978 "name": "Existed_Raid", 00:25:05.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.978 "strip_size_kb": 64, 00:25:05.978 "state": "configuring", 00:25:05.978 "raid_level": "raid5f", 00:25:05.978 "superblock": false, 00:25:05.978 "num_base_bdevs": 3, 00:25:05.978 "num_base_bdevs_discovered": 2, 00:25:05.978 "num_base_bdevs_operational": 3, 00:25:05.978 "base_bdevs_list": [ 00:25:05.978 { 00:25:05.978 "name": "BaseBdev1", 00:25:05.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.978 "is_configured": false, 00:25:05.978 "data_offset": 0, 00:25:05.978 "data_size": 0 00:25:05.978 }, 00:25:05.978 { 00:25:05.978 "name": "BaseBdev2", 00:25:05.978 "uuid": "7f3b9876-574c-45bb-91dd-9237098abe99", 00:25:05.978 "is_configured": true, 00:25:05.978 "data_offset": 0, 00:25:05.978 "data_size": 65536 00:25:05.978 }, 00:25:05.978 { 00:25:05.978 "name": "BaseBdev3", 00:25:05.978 "uuid": "0bb88c46-5e22-4fe0-840a-95b0dafc086d", 00:25:05.978 "is_configured": true, 
00:25:05.978 "data_offset": 0, 00:25:05.978 "data_size": 65536 00:25:05.978 } 00:25:05.978 ] 00:25:05.978 }' 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:05.978 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:06.235 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:25:06.235 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.235 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:06.235 [2024-12-05 12:56:48.640145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:06.235 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.235 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:06.235 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:06.235 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:06.235 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:06.235 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:06.235 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:06.235 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:06.235 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:06.235 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:06.235 12:56:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:06.235 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:06.235 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.235 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:06.235 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:06.236 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.236 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:06.236 "name": "Existed_Raid", 00:25:06.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.236 "strip_size_kb": 64, 00:25:06.236 "state": "configuring", 00:25:06.236 "raid_level": "raid5f", 00:25:06.236 "superblock": false, 00:25:06.236 "num_base_bdevs": 3, 00:25:06.236 "num_base_bdevs_discovered": 1, 00:25:06.236 "num_base_bdevs_operational": 3, 00:25:06.236 "base_bdevs_list": [ 00:25:06.236 { 00:25:06.236 "name": "BaseBdev1", 00:25:06.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.236 "is_configured": false, 00:25:06.236 "data_offset": 0, 00:25:06.236 "data_size": 0 00:25:06.236 }, 00:25:06.236 { 00:25:06.236 "name": null, 00:25:06.236 "uuid": "7f3b9876-574c-45bb-91dd-9237098abe99", 00:25:06.236 "is_configured": false, 00:25:06.236 "data_offset": 0, 00:25:06.236 "data_size": 65536 00:25:06.236 }, 00:25:06.236 { 00:25:06.236 "name": "BaseBdev3", 00:25:06.236 "uuid": "0bb88c46-5e22-4fe0-840a-95b0dafc086d", 00:25:06.236 "is_configured": true, 00:25:06.236 "data_offset": 0, 00:25:06.236 "data_size": 65536 00:25:06.236 } 00:25:06.236 ] 00:25:06.236 }' 00:25:06.236 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:06.236 12:56:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:06.494 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:06.494 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.494 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:06.494 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:06.494 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.494 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:25:06.494 12:56:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:06.494 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.494 12:56:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:06.494 [2024-12-05 12:56:49.006656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:06.494 BaseBdev1 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:06.494 12:56:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:06.494 [ 00:25:06.494 { 00:25:06.494 "name": "BaseBdev1", 00:25:06.494 "aliases": [ 00:25:06.494 "e4015e69-acd4-4aae-a94b-d8bcf73bd977" 00:25:06.494 ], 00:25:06.494 "product_name": "Malloc disk", 00:25:06.494 "block_size": 512, 00:25:06.494 "num_blocks": 65536, 00:25:06.494 "uuid": "e4015e69-acd4-4aae-a94b-d8bcf73bd977", 00:25:06.494 "assigned_rate_limits": { 00:25:06.494 "rw_ios_per_sec": 0, 00:25:06.494 "rw_mbytes_per_sec": 0, 00:25:06.494 "r_mbytes_per_sec": 0, 00:25:06.494 "w_mbytes_per_sec": 0 00:25:06.494 }, 00:25:06.494 "claimed": true, 00:25:06.494 "claim_type": "exclusive_write", 00:25:06.494 "zoned": false, 00:25:06.494 "supported_io_types": { 00:25:06.494 "read": true, 00:25:06.494 "write": true, 00:25:06.494 "unmap": true, 00:25:06.494 "flush": true, 00:25:06.494 "reset": true, 00:25:06.494 "nvme_admin": false, 00:25:06.494 "nvme_io": false, 00:25:06.494 "nvme_io_md": false, 00:25:06.494 "write_zeroes": true, 00:25:06.494 "zcopy": true, 00:25:06.494 "get_zone_info": false, 00:25:06.494 "zone_management": false, 00:25:06.494 "zone_append": false, 00:25:06.494 
"compare": false, 00:25:06.494 "compare_and_write": false, 00:25:06.494 "abort": true, 00:25:06.494 "seek_hole": false, 00:25:06.494 "seek_data": false, 00:25:06.494 "copy": true, 00:25:06.494 "nvme_iov_md": false 00:25:06.494 }, 00:25:06.494 "memory_domains": [ 00:25:06.494 { 00:25:06.494 "dma_device_id": "system", 00:25:06.494 "dma_device_type": 1 00:25:06.494 }, 00:25:06.494 { 00:25:06.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:06.494 "dma_device_type": 2 00:25:06.494 } 00:25:06.494 ], 00:25:06.494 "driver_specific": {} 00:25:06.494 } 00:25:06.494 ] 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:06.494 12:56:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:06.494 "name": "Existed_Raid", 00:25:06.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.494 "strip_size_kb": 64, 00:25:06.494 "state": "configuring", 00:25:06.494 "raid_level": "raid5f", 00:25:06.494 "superblock": false, 00:25:06.494 "num_base_bdevs": 3, 00:25:06.494 "num_base_bdevs_discovered": 2, 00:25:06.494 "num_base_bdevs_operational": 3, 00:25:06.494 "base_bdevs_list": [ 00:25:06.494 { 00:25:06.494 "name": "BaseBdev1", 00:25:06.494 "uuid": "e4015e69-acd4-4aae-a94b-d8bcf73bd977", 00:25:06.494 "is_configured": true, 00:25:06.494 "data_offset": 0, 00:25:06.494 "data_size": 65536 00:25:06.494 }, 00:25:06.494 { 00:25:06.494 "name": null, 00:25:06.494 "uuid": "7f3b9876-574c-45bb-91dd-9237098abe99", 00:25:06.494 "is_configured": false, 00:25:06.494 "data_offset": 0, 00:25:06.494 "data_size": 65536 00:25:06.494 }, 00:25:06.494 { 00:25:06.494 "name": "BaseBdev3", 00:25:06.494 "uuid": "0bb88c46-5e22-4fe0-840a-95b0dafc086d", 00:25:06.494 "is_configured": true, 00:25:06.494 "data_offset": 0, 00:25:06.494 "data_size": 65536 00:25:06.494 } 00:25:06.494 ] 00:25:06.494 }' 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:06.494 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.060 12:56:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.060 [2024-12-05 12:56:49.378777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:07.060 12:56:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:07.060 "name": "Existed_Raid", 00:25:07.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.060 "strip_size_kb": 64, 00:25:07.060 "state": "configuring", 00:25:07.060 "raid_level": "raid5f", 00:25:07.060 "superblock": false, 00:25:07.060 "num_base_bdevs": 3, 00:25:07.060 "num_base_bdevs_discovered": 1, 00:25:07.060 "num_base_bdevs_operational": 3, 00:25:07.060 "base_bdevs_list": [ 00:25:07.060 { 00:25:07.060 "name": "BaseBdev1", 00:25:07.060 "uuid": "e4015e69-acd4-4aae-a94b-d8bcf73bd977", 00:25:07.060 "is_configured": true, 00:25:07.060 "data_offset": 0, 00:25:07.060 "data_size": 65536 00:25:07.060 }, 00:25:07.060 { 00:25:07.060 "name": null, 00:25:07.060 "uuid": "7f3b9876-574c-45bb-91dd-9237098abe99", 00:25:07.060 "is_configured": false, 00:25:07.060 "data_offset": 0, 00:25:07.060 "data_size": 65536 00:25:07.060 }, 00:25:07.060 { 00:25:07.060 "name": null, 
00:25:07.060 "uuid": "0bb88c46-5e22-4fe0-840a-95b0dafc086d", 00:25:07.060 "is_configured": false, 00:25:07.060 "data_offset": 0, 00:25:07.060 "data_size": 65536 00:25:07.060 } 00:25:07.060 ] 00:25:07.060 }' 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:07.060 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.318 [2024-12-05 12:56:49.738871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:07.318 12:56:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:07.318 "name": "Existed_Raid", 00:25:07.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.318 "strip_size_kb": 64, 00:25:07.318 "state": "configuring", 00:25:07.318 "raid_level": "raid5f", 00:25:07.318 "superblock": false, 00:25:07.318 "num_base_bdevs": 3, 00:25:07.318 "num_base_bdevs_discovered": 2, 00:25:07.318 "num_base_bdevs_operational": 3, 00:25:07.318 "base_bdevs_list": [ 00:25:07.318 { 
00:25:07.318 "name": "BaseBdev1", 00:25:07.318 "uuid": "e4015e69-acd4-4aae-a94b-d8bcf73bd977", 00:25:07.318 "is_configured": true, 00:25:07.318 "data_offset": 0, 00:25:07.318 "data_size": 65536 00:25:07.318 }, 00:25:07.318 { 00:25:07.318 "name": null, 00:25:07.318 "uuid": "7f3b9876-574c-45bb-91dd-9237098abe99", 00:25:07.318 "is_configured": false, 00:25:07.318 "data_offset": 0, 00:25:07.318 "data_size": 65536 00:25:07.318 }, 00:25:07.318 { 00:25:07.318 "name": "BaseBdev3", 00:25:07.318 "uuid": "0bb88c46-5e22-4fe0-840a-95b0dafc086d", 00:25:07.318 "is_configured": true, 00:25:07.318 "data_offset": 0, 00:25:07.318 "data_size": 65536 00:25:07.318 } 00:25:07.318 ] 00:25:07.318 }' 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:07.318 12:56:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.576 [2024-12-05 12:56:50.098949] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.576 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.834 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.834 12:56:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:07.834 "name": "Existed_Raid", 00:25:07.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.834 "strip_size_kb": 64, 00:25:07.834 "state": "configuring", 00:25:07.834 "raid_level": "raid5f", 00:25:07.834 "superblock": false, 00:25:07.834 "num_base_bdevs": 3, 00:25:07.834 "num_base_bdevs_discovered": 1, 00:25:07.834 "num_base_bdevs_operational": 3, 00:25:07.834 "base_bdevs_list": [ 00:25:07.834 { 00:25:07.834 "name": null, 00:25:07.834 "uuid": "e4015e69-acd4-4aae-a94b-d8bcf73bd977", 00:25:07.834 "is_configured": false, 00:25:07.834 "data_offset": 0, 00:25:07.834 "data_size": 65536 00:25:07.834 }, 00:25:07.834 { 00:25:07.834 "name": null, 00:25:07.834 "uuid": "7f3b9876-574c-45bb-91dd-9237098abe99", 00:25:07.834 "is_configured": false, 00:25:07.834 "data_offset": 0, 00:25:07.834 "data_size": 65536 00:25:07.834 }, 00:25:07.834 { 00:25:07.834 "name": "BaseBdev3", 00:25:07.834 "uuid": "0bb88c46-5e22-4fe0-840a-95b0dafc086d", 00:25:07.834 "is_configured": true, 00:25:07.834 "data_offset": 0, 00:25:07.834 "data_size": 65536 00:25:07.834 } 00:25:07.834 ] 00:25:07.834 }' 00:25:07.834 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:07.834 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.092 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.092 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.092 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:08.092 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.092 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.092 12:56:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:25:08.092 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:08.092 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.092 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.092 [2024-12-05 12:56:50.511001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:08.092 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.093 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:08.093 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:08.093 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:08.093 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:08.093 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:08.093 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:08.093 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:08.093 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:08.093 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:08.093 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:08.093 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:08.093 12:56:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.093 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.093 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.093 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.093 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:08.093 "name": "Existed_Raid", 00:25:08.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.093 "strip_size_kb": 64, 00:25:08.093 "state": "configuring", 00:25:08.093 "raid_level": "raid5f", 00:25:08.093 "superblock": false, 00:25:08.093 "num_base_bdevs": 3, 00:25:08.093 "num_base_bdevs_discovered": 2, 00:25:08.093 "num_base_bdevs_operational": 3, 00:25:08.093 "base_bdevs_list": [ 00:25:08.093 { 00:25:08.093 "name": null, 00:25:08.093 "uuid": "e4015e69-acd4-4aae-a94b-d8bcf73bd977", 00:25:08.093 "is_configured": false, 00:25:08.093 "data_offset": 0, 00:25:08.093 "data_size": 65536 00:25:08.093 }, 00:25:08.093 { 00:25:08.093 "name": "BaseBdev2", 00:25:08.093 "uuid": "7f3b9876-574c-45bb-91dd-9237098abe99", 00:25:08.093 "is_configured": true, 00:25:08.093 "data_offset": 0, 00:25:08.093 "data_size": 65536 00:25:08.093 }, 00:25:08.093 { 00:25:08.093 "name": "BaseBdev3", 00:25:08.093 "uuid": "0bb88c46-5e22-4fe0-840a-95b0dafc086d", 00:25:08.093 "is_configured": true, 00:25:08.093 "data_offset": 0, 00:25:08.093 "data_size": 65536 00:25:08.093 } 00:25:08.093 ] 00:25:08.093 }' 00:25:08.093 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:08.093 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.351 12:56:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e4015e69-acd4-4aae-a94b-d8bcf73bd977 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.351 [2024-12-05 12:56:50.917557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:08.351 [2024-12-05 12:56:50.917697] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:08.351 [2024-12-05 12:56:50.917712] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:25:08.351 [2024-12-05 12:56:50.917928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:25:08.351 [2024-12-05 12:56:50.920820] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:08.351 [2024-12-05 12:56:50.920917] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:25:08.351 [2024-12-05 12:56:50.921112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:08.351 NewBaseBdev 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:08.351 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.351 12:56:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.609 [ 00:25:08.609 { 00:25:08.609 "name": "NewBaseBdev", 00:25:08.609 "aliases": [ 00:25:08.609 "e4015e69-acd4-4aae-a94b-d8bcf73bd977" 00:25:08.609 ], 00:25:08.609 "product_name": "Malloc disk", 00:25:08.609 "block_size": 512, 00:25:08.609 "num_blocks": 65536, 00:25:08.609 "uuid": "e4015e69-acd4-4aae-a94b-d8bcf73bd977", 00:25:08.609 "assigned_rate_limits": { 00:25:08.609 "rw_ios_per_sec": 0, 00:25:08.609 "rw_mbytes_per_sec": 0, 00:25:08.609 "r_mbytes_per_sec": 0, 00:25:08.609 "w_mbytes_per_sec": 0 00:25:08.609 }, 00:25:08.609 "claimed": true, 00:25:08.609 "claim_type": "exclusive_write", 00:25:08.609 "zoned": false, 00:25:08.609 "supported_io_types": { 00:25:08.609 "read": true, 00:25:08.609 "write": true, 00:25:08.609 "unmap": true, 00:25:08.609 "flush": true, 00:25:08.609 "reset": true, 00:25:08.609 "nvme_admin": false, 00:25:08.609 "nvme_io": false, 00:25:08.609 "nvme_io_md": false, 00:25:08.609 "write_zeroes": true, 00:25:08.609 "zcopy": true, 00:25:08.609 "get_zone_info": false, 00:25:08.609 "zone_management": false, 00:25:08.609 "zone_append": false, 00:25:08.609 "compare": false, 00:25:08.609 "compare_and_write": false, 00:25:08.609 "abort": true, 00:25:08.609 "seek_hole": false, 00:25:08.609 "seek_data": false, 00:25:08.609 "copy": true, 00:25:08.609 "nvme_iov_md": false 00:25:08.609 }, 00:25:08.609 "memory_domains": [ 00:25:08.609 { 00:25:08.609 "dma_device_id": "system", 00:25:08.609 "dma_device_type": 1 00:25:08.609 }, 00:25:08.609 { 00:25:08.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.609 "dma_device_type": 2 00:25:08.609 } 00:25:08.609 ], 00:25:08.609 "driver_specific": {} 00:25:08.609 } 00:25:08.609 ] 00:25:08.609 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.609 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:08.609 12:56:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:08.609 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:08.609 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:08.609 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:08.609 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:08.609 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:08.609 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:08.609 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:08.609 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:08.609 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:08.609 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.609 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.609 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.609 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:08.609 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.609 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:08.609 "name": "Existed_Raid", 00:25:08.609 "uuid": "43805892-90f3-4ecc-be0e-5cd1a0e3be86", 00:25:08.610 "strip_size_kb": 64, 00:25:08.610 "state": "online", 
00:25:08.610 "raid_level": "raid5f", 00:25:08.610 "superblock": false, 00:25:08.610 "num_base_bdevs": 3, 00:25:08.610 "num_base_bdevs_discovered": 3, 00:25:08.610 "num_base_bdevs_operational": 3, 00:25:08.610 "base_bdevs_list": [ 00:25:08.610 { 00:25:08.610 "name": "NewBaseBdev", 00:25:08.610 "uuid": "e4015e69-acd4-4aae-a94b-d8bcf73bd977", 00:25:08.610 "is_configured": true, 00:25:08.610 "data_offset": 0, 00:25:08.610 "data_size": 65536 00:25:08.610 }, 00:25:08.610 { 00:25:08.610 "name": "BaseBdev2", 00:25:08.610 "uuid": "7f3b9876-574c-45bb-91dd-9237098abe99", 00:25:08.610 "is_configured": true, 00:25:08.610 "data_offset": 0, 00:25:08.610 "data_size": 65536 00:25:08.610 }, 00:25:08.610 { 00:25:08.610 "name": "BaseBdev3", 00:25:08.610 "uuid": "0bb88c46-5e22-4fe0-840a-95b0dafc086d", 00:25:08.610 "is_configured": true, 00:25:08.610 "data_offset": 0, 00:25:08.610 "data_size": 65536 00:25:08.610 } 00:25:08.610 ] 00:25:08.610 }' 00:25:08.610 12:56:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:08.610 12:56:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.868 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:25:08.868 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:08.868 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:08.868 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:08.868 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:08.868 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:08.868 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:08.868 12:56:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.868 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.868 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:08.868 [2024-12-05 12:56:51.264707] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:08.869 "name": "Existed_Raid", 00:25:08.869 "aliases": [ 00:25:08.869 "43805892-90f3-4ecc-be0e-5cd1a0e3be86" 00:25:08.869 ], 00:25:08.869 "product_name": "Raid Volume", 00:25:08.869 "block_size": 512, 00:25:08.869 "num_blocks": 131072, 00:25:08.869 "uuid": "43805892-90f3-4ecc-be0e-5cd1a0e3be86", 00:25:08.869 "assigned_rate_limits": { 00:25:08.869 "rw_ios_per_sec": 0, 00:25:08.869 "rw_mbytes_per_sec": 0, 00:25:08.869 "r_mbytes_per_sec": 0, 00:25:08.869 "w_mbytes_per_sec": 0 00:25:08.869 }, 00:25:08.869 "claimed": false, 00:25:08.869 "zoned": false, 00:25:08.869 "supported_io_types": { 00:25:08.869 "read": true, 00:25:08.869 "write": true, 00:25:08.869 "unmap": false, 00:25:08.869 "flush": false, 00:25:08.869 "reset": true, 00:25:08.869 "nvme_admin": false, 00:25:08.869 "nvme_io": false, 00:25:08.869 "nvme_io_md": false, 00:25:08.869 "write_zeroes": true, 00:25:08.869 "zcopy": false, 00:25:08.869 "get_zone_info": false, 00:25:08.869 "zone_management": false, 00:25:08.869 "zone_append": false, 00:25:08.869 "compare": false, 00:25:08.869 "compare_and_write": false, 00:25:08.869 "abort": false, 00:25:08.869 "seek_hole": false, 00:25:08.869 "seek_data": false, 00:25:08.869 "copy": false, 00:25:08.869 "nvme_iov_md": false 00:25:08.869 }, 00:25:08.869 "driver_specific": { 00:25:08.869 "raid": { 00:25:08.869 "uuid": 
"43805892-90f3-4ecc-be0e-5cd1a0e3be86", 00:25:08.869 "strip_size_kb": 64, 00:25:08.869 "state": "online", 00:25:08.869 "raid_level": "raid5f", 00:25:08.869 "superblock": false, 00:25:08.869 "num_base_bdevs": 3, 00:25:08.869 "num_base_bdevs_discovered": 3, 00:25:08.869 "num_base_bdevs_operational": 3, 00:25:08.869 "base_bdevs_list": [ 00:25:08.869 { 00:25:08.869 "name": "NewBaseBdev", 00:25:08.869 "uuid": "e4015e69-acd4-4aae-a94b-d8bcf73bd977", 00:25:08.869 "is_configured": true, 00:25:08.869 "data_offset": 0, 00:25:08.869 "data_size": 65536 00:25:08.869 }, 00:25:08.869 { 00:25:08.869 "name": "BaseBdev2", 00:25:08.869 "uuid": "7f3b9876-574c-45bb-91dd-9237098abe99", 00:25:08.869 "is_configured": true, 00:25:08.869 "data_offset": 0, 00:25:08.869 "data_size": 65536 00:25:08.869 }, 00:25:08.869 { 00:25:08.869 "name": "BaseBdev3", 00:25:08.869 "uuid": "0bb88c46-5e22-4fe0-840a-95b0dafc086d", 00:25:08.869 "is_configured": true, 00:25:08.869 "data_offset": 0, 00:25:08.869 "data_size": 65536 00:25:08.869 } 00:25:08.869 ] 00:25:08.869 } 00:25:08.869 } 00:25:08.869 }' 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:25:08.869 BaseBdev2 00:25:08.869 BaseBdev3' 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.869 [2024-12-05 12:56:51.444554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:08.869 [2024-12-05 12:56:51.444574] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:08.869 [2024-12-05 12:56:51.444630] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:08.869 [2024-12-05 12:56:51.444851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:08.869 [2024-12-05 12:56:51.444861] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 77450 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 77450 ']' 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 77450 00:25:08.869 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:25:09.165 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:09.165 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77450 00:25:09.165 killing process with pid 77450 00:25:09.165 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:09.165 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:09.165 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77450' 00:25:09.165 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 77450 00:25:09.165 [2024-12-05 12:56:51.474129] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:09.165 12:56:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 77450 00:25:09.165 [2024-12-05 12:56:51.622536] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:09.731 12:56:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:25:09.731 00:25:09.731 real 0m7.392s 00:25:09.731 user 0m11.922s 00:25:09.731 sys 0m1.202s 00:25:09.731 ************************************ 00:25:09.731 END TEST raid5f_state_function_test 00:25:09.731 ************************************ 00:25:09.731 12:56:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:09.731 12:56:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.731 12:56:52 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:25:09.732 12:56:52 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:09.732 12:56:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:09.732 12:56:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:09.732 ************************************ 00:25:09.732 START TEST raid5f_state_function_test_sb 00:25:09.732 ************************************ 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:25:09.732 12:56:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:09.732 Process raid pid: 78045 00:25:09.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=78045 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78045' 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 78045 00:25:09.732 12:56:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78045 ']' 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.732 12:56:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:09.732 [2024-12-05 12:56:52.293260] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:25:09.732 [2024-12-05 12:56:52.293368] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:09.990 [2024-12-05 12:56:52.443472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.990 [2024-12-05 12:56:52.525548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.254 [2024-12-05 12:56:52.637346] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:10.254 [2024-12-05 12:56:52.637372] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:10.517 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:10.517 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:25:10.517 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:10.517 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.517 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.517 [2024-12-05 12:56:53.093187] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:10.517 [2024-12-05 12:56:53.093237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:10.517 [2024-12-05 12:56:53.093246] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:10.517 [2024-12-05 12:56:53.093254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:10.517 [2024-12-05 12:56:53.093259] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:25:10.517 [2024-12-05 12:56:53.093266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:10.517 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.517 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:10.517 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:10.517 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:10.517 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:10.517 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:10.517 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:10.517 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:10.517 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:10.518 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:10.518 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:10.775 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.775 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.775 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.775 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:10.775 12:56:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.775 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:10.775 "name": "Existed_Raid", 00:25:10.775 "uuid": "43ee8f47-9803-4352-b11d-0d1ea85e2de4", 00:25:10.775 "strip_size_kb": 64, 00:25:10.775 "state": "configuring", 00:25:10.775 "raid_level": "raid5f", 00:25:10.775 "superblock": true, 00:25:10.775 "num_base_bdevs": 3, 00:25:10.775 "num_base_bdevs_discovered": 0, 00:25:10.775 "num_base_bdevs_operational": 3, 00:25:10.775 "base_bdevs_list": [ 00:25:10.775 { 00:25:10.775 "name": "BaseBdev1", 00:25:10.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.775 "is_configured": false, 00:25:10.775 "data_offset": 0, 00:25:10.775 "data_size": 0 00:25:10.775 }, 00:25:10.775 { 00:25:10.775 "name": "BaseBdev2", 00:25:10.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.775 "is_configured": false, 00:25:10.775 "data_offset": 0, 00:25:10.775 "data_size": 0 00:25:10.775 }, 00:25:10.775 { 00:25:10.775 "name": "BaseBdev3", 00:25:10.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.775 "is_configured": false, 00:25:10.775 "data_offset": 0, 00:25:10.775 "data_size": 0 00:25:10.775 } 00:25:10.775 ] 00:25:10.775 }' 00:25:10.775 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:10.775 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.034 [2024-12-05 12:56:53.413192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:11.034 
[2024-12-05 12:56:53.413220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.034 [2024-12-05 12:56:53.421195] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:11.034 [2024-12-05 12:56:53.421228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:11.034 [2024-12-05 12:56:53.421235] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:11.034 [2024-12-05 12:56:53.421242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:11.034 [2024-12-05 12:56:53.421247] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:11.034 [2024-12-05 12:56:53.421254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.034 [2024-12-05 12:56:53.449172] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:11.034 BaseBdev1 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.034 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.034 [ 00:25:11.035 { 00:25:11.035 "name": "BaseBdev1", 00:25:11.035 "aliases": [ 00:25:11.035 "2f9681f3-de32-4c66-a97b-808f94d5b501" 00:25:11.035 ], 00:25:11.035 "product_name": "Malloc disk", 00:25:11.035 "block_size": 512, 00:25:11.035 
"num_blocks": 65536, 00:25:11.035 "uuid": "2f9681f3-de32-4c66-a97b-808f94d5b501", 00:25:11.035 "assigned_rate_limits": { 00:25:11.035 "rw_ios_per_sec": 0, 00:25:11.035 "rw_mbytes_per_sec": 0, 00:25:11.035 "r_mbytes_per_sec": 0, 00:25:11.035 "w_mbytes_per_sec": 0 00:25:11.035 }, 00:25:11.035 "claimed": true, 00:25:11.035 "claim_type": "exclusive_write", 00:25:11.035 "zoned": false, 00:25:11.035 "supported_io_types": { 00:25:11.035 "read": true, 00:25:11.035 "write": true, 00:25:11.035 "unmap": true, 00:25:11.035 "flush": true, 00:25:11.035 "reset": true, 00:25:11.035 "nvme_admin": false, 00:25:11.035 "nvme_io": false, 00:25:11.035 "nvme_io_md": false, 00:25:11.035 "write_zeroes": true, 00:25:11.035 "zcopy": true, 00:25:11.035 "get_zone_info": false, 00:25:11.035 "zone_management": false, 00:25:11.035 "zone_append": false, 00:25:11.035 "compare": false, 00:25:11.035 "compare_and_write": false, 00:25:11.035 "abort": true, 00:25:11.035 "seek_hole": false, 00:25:11.035 "seek_data": false, 00:25:11.035 "copy": true, 00:25:11.035 "nvme_iov_md": false 00:25:11.035 }, 00:25:11.035 "memory_domains": [ 00:25:11.035 { 00:25:11.035 "dma_device_id": "system", 00:25:11.035 "dma_device_type": 1 00:25:11.035 }, 00:25:11.035 { 00:25:11.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.035 "dma_device_type": 2 00:25:11.035 } 00:25:11.035 ], 00:25:11.035 "driver_specific": {} 00:25:11.035 } 00:25:11.035 ] 00:25:11.035 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.035 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:11.035 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:11.035 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:11.035 12:56:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:11.035 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:11.035 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:11.035 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:11.035 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:11.035 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:11.035 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:11.035 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:11.035 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.035 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:11.035 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.035 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.035 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.035 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:11.035 "name": "Existed_Raid", 00:25:11.035 "uuid": "0f342706-8533-4076-a645-7658e641538f", 00:25:11.035 "strip_size_kb": 64, 00:25:11.035 "state": "configuring", 00:25:11.035 "raid_level": "raid5f", 00:25:11.035 "superblock": true, 00:25:11.035 "num_base_bdevs": 3, 00:25:11.035 "num_base_bdevs_discovered": 1, 00:25:11.035 "num_base_bdevs_operational": 3, 00:25:11.035 "base_bdevs_list": [ 00:25:11.035 { 00:25:11.035 
"name": "BaseBdev1", 00:25:11.035 "uuid": "2f9681f3-de32-4c66-a97b-808f94d5b501", 00:25:11.035 "is_configured": true, 00:25:11.035 "data_offset": 2048, 00:25:11.035 "data_size": 63488 00:25:11.035 }, 00:25:11.035 { 00:25:11.035 "name": "BaseBdev2", 00:25:11.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.035 "is_configured": false, 00:25:11.035 "data_offset": 0, 00:25:11.035 "data_size": 0 00:25:11.035 }, 00:25:11.035 { 00:25:11.035 "name": "BaseBdev3", 00:25:11.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.035 "is_configured": false, 00:25:11.035 "data_offset": 0, 00:25:11.035 "data_size": 0 00:25:11.035 } 00:25:11.035 ] 00:25:11.035 }' 00:25:11.035 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:11.035 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.294 [2024-12-05 12:56:53.797258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:11.294 [2024-12-05 12:56:53.797391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:25:11.294 [2024-12-05 12:56:53.805308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:11.294 [2024-12-05 12:56:53.806830] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:11.294 [2024-12-05 12:56:53.806865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:11.294 [2024-12-05 12:56:53.806872] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:11.294 [2024-12-05 12:56:53.806880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:11.294 "name": "Existed_Raid", 00:25:11.294 "uuid": "b6ef41ff-e194-451e-af71-393facbf45e4", 00:25:11.294 "strip_size_kb": 64, 00:25:11.294 "state": "configuring", 00:25:11.294 "raid_level": "raid5f", 00:25:11.294 "superblock": true, 00:25:11.294 "num_base_bdevs": 3, 00:25:11.294 "num_base_bdevs_discovered": 1, 00:25:11.294 "num_base_bdevs_operational": 3, 00:25:11.294 "base_bdevs_list": [ 00:25:11.294 { 00:25:11.294 "name": "BaseBdev1", 00:25:11.294 "uuid": "2f9681f3-de32-4c66-a97b-808f94d5b501", 00:25:11.294 "is_configured": true, 00:25:11.294 "data_offset": 2048, 00:25:11.294 "data_size": 63488 00:25:11.294 }, 00:25:11.294 { 00:25:11.294 "name": "BaseBdev2", 00:25:11.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.294 "is_configured": false, 00:25:11.294 "data_offset": 0, 00:25:11.294 "data_size": 0 00:25:11.294 }, 00:25:11.294 { 00:25:11.294 "name": "BaseBdev3", 00:25:11.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.294 "is_configured": false, 00:25:11.294 "data_offset": 0, 00:25:11.294 "data_size": 
0 00:25:11.294 } 00:25:11.294 ] 00:25:11.294 }' 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:11.294 12:56:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.551 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:11.551 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.551 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.809 [2024-12-05 12:56:54.151664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:11.809 BaseBdev2 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.809 [ 00:25:11.809 { 00:25:11.809 "name": "BaseBdev2", 00:25:11.809 "aliases": [ 00:25:11.809 "ac96895e-267c-4651-a3fc-990f183809bd" 00:25:11.809 ], 00:25:11.809 "product_name": "Malloc disk", 00:25:11.809 "block_size": 512, 00:25:11.809 "num_blocks": 65536, 00:25:11.809 "uuid": "ac96895e-267c-4651-a3fc-990f183809bd", 00:25:11.809 "assigned_rate_limits": { 00:25:11.809 "rw_ios_per_sec": 0, 00:25:11.809 "rw_mbytes_per_sec": 0, 00:25:11.809 "r_mbytes_per_sec": 0, 00:25:11.809 "w_mbytes_per_sec": 0 00:25:11.809 }, 00:25:11.809 "claimed": true, 00:25:11.809 "claim_type": "exclusive_write", 00:25:11.809 "zoned": false, 00:25:11.809 "supported_io_types": { 00:25:11.809 "read": true, 00:25:11.809 "write": true, 00:25:11.809 "unmap": true, 00:25:11.809 "flush": true, 00:25:11.809 "reset": true, 00:25:11.809 "nvme_admin": false, 00:25:11.809 "nvme_io": false, 00:25:11.809 "nvme_io_md": false, 00:25:11.809 "write_zeroes": true, 00:25:11.809 "zcopy": true, 00:25:11.809 "get_zone_info": false, 00:25:11.809 "zone_management": false, 00:25:11.809 "zone_append": false, 00:25:11.809 "compare": false, 00:25:11.809 "compare_and_write": false, 00:25:11.809 "abort": true, 00:25:11.809 "seek_hole": false, 00:25:11.809 "seek_data": false, 00:25:11.809 "copy": true, 00:25:11.809 "nvme_iov_md": false 00:25:11.809 }, 00:25:11.809 "memory_domains": [ 00:25:11.809 { 00:25:11.809 "dma_device_id": "system", 00:25:11.809 "dma_device_type": 1 00:25:11.809 }, 00:25:11.809 { 00:25:11.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.809 "dma_device_type": 2 00:25:11.809 } 
00:25:11.809 ], 00:25:11.809 "driver_specific": {} 00:25:11.809 } 00:25:11.809 ] 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:11.809 "name": "Existed_Raid", 00:25:11.809 "uuid": "b6ef41ff-e194-451e-af71-393facbf45e4", 00:25:11.809 "strip_size_kb": 64, 00:25:11.809 "state": "configuring", 00:25:11.809 "raid_level": "raid5f", 00:25:11.809 "superblock": true, 00:25:11.809 "num_base_bdevs": 3, 00:25:11.809 "num_base_bdevs_discovered": 2, 00:25:11.809 "num_base_bdevs_operational": 3, 00:25:11.809 "base_bdevs_list": [ 00:25:11.809 { 00:25:11.809 "name": "BaseBdev1", 00:25:11.809 "uuid": "2f9681f3-de32-4c66-a97b-808f94d5b501", 00:25:11.809 "is_configured": true, 00:25:11.809 "data_offset": 2048, 00:25:11.809 "data_size": 63488 00:25:11.809 }, 00:25:11.809 { 00:25:11.809 "name": "BaseBdev2", 00:25:11.809 "uuid": "ac96895e-267c-4651-a3fc-990f183809bd", 00:25:11.809 "is_configured": true, 00:25:11.809 "data_offset": 2048, 00:25:11.809 "data_size": 63488 00:25:11.809 }, 00:25:11.809 { 00:25:11.809 "name": "BaseBdev3", 00:25:11.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.809 "is_configured": false, 00:25:11.809 "data_offset": 0, 00:25:11.809 "data_size": 0 00:25:11.809 } 00:25:11.809 ] 00:25:11.809 }' 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:11.809 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.067 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:12.067 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:25:12.067 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.067 [2024-12-05 12:56:54.528470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:12.067 [2024-12-05 12:56:54.528675] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:12.067 [2024-12-05 12:56:54.528691] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:12.067 [2024-12-05 12:56:54.528897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:12.067 BaseBdev3 00:25:12.067 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.067 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:25:12.067 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:12.067 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:12.067 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:12.067 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.068 [2024-12-05 12:56:54.532085] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:12.068 [2024-12-05 12:56:54.532183] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:12.068 [2024-12-05 12:56:54.532329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.068 [ 00:25:12.068 { 00:25:12.068 "name": "BaseBdev3", 00:25:12.068 "aliases": [ 00:25:12.068 "7e4fd626-5073-4e2e-a439-4d6202686544" 00:25:12.068 ], 00:25:12.068 "product_name": "Malloc disk", 00:25:12.068 "block_size": 512, 00:25:12.068 "num_blocks": 65536, 00:25:12.068 "uuid": "7e4fd626-5073-4e2e-a439-4d6202686544", 00:25:12.068 "assigned_rate_limits": { 00:25:12.068 "rw_ios_per_sec": 0, 00:25:12.068 "rw_mbytes_per_sec": 0, 00:25:12.068 "r_mbytes_per_sec": 0, 00:25:12.068 "w_mbytes_per_sec": 0 00:25:12.068 }, 00:25:12.068 "claimed": true, 00:25:12.068 "claim_type": "exclusive_write", 00:25:12.068 "zoned": false, 00:25:12.068 "supported_io_types": { 00:25:12.068 "read": true, 00:25:12.068 "write": true, 00:25:12.068 "unmap": true, 00:25:12.068 "flush": true, 00:25:12.068 "reset": true, 00:25:12.068 "nvme_admin": false, 00:25:12.068 "nvme_io": false, 00:25:12.068 "nvme_io_md": false, 00:25:12.068 "write_zeroes": true, 00:25:12.068 "zcopy": true, 00:25:12.068 "get_zone_info": false, 00:25:12.068 "zone_management": false, 00:25:12.068 "zone_append": false, 00:25:12.068 "compare": false, 00:25:12.068 "compare_and_write": false, 00:25:12.068 "abort": true, 00:25:12.068 "seek_hole": false, 00:25:12.068 "seek_data": false, 00:25:12.068 "copy": true, 00:25:12.068 
"nvme_iov_md": false 00:25:12.068 }, 00:25:12.068 "memory_domains": [ 00:25:12.068 { 00:25:12.068 "dma_device_id": "system", 00:25:12.068 "dma_device_type": 1 00:25:12.068 }, 00:25:12.068 { 00:25:12.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.068 "dma_device_type": 2 00:25:12.068 } 00:25:12.068 ], 00:25:12.068 "driver_specific": {} 00:25:12.068 } 00:25:12.068 ] 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:12.068 "name": "Existed_Raid", 00:25:12.068 "uuid": "b6ef41ff-e194-451e-af71-393facbf45e4", 00:25:12.068 "strip_size_kb": 64, 00:25:12.068 "state": "online", 00:25:12.068 "raid_level": "raid5f", 00:25:12.068 "superblock": true, 00:25:12.068 "num_base_bdevs": 3, 00:25:12.068 "num_base_bdevs_discovered": 3, 00:25:12.068 "num_base_bdevs_operational": 3, 00:25:12.068 "base_bdevs_list": [ 00:25:12.068 { 00:25:12.068 "name": "BaseBdev1", 00:25:12.068 "uuid": "2f9681f3-de32-4c66-a97b-808f94d5b501", 00:25:12.068 "is_configured": true, 00:25:12.068 "data_offset": 2048, 00:25:12.068 "data_size": 63488 00:25:12.068 }, 00:25:12.068 { 00:25:12.068 "name": "BaseBdev2", 00:25:12.068 "uuid": "ac96895e-267c-4651-a3fc-990f183809bd", 00:25:12.068 "is_configured": true, 00:25:12.068 "data_offset": 2048, 00:25:12.068 "data_size": 63488 00:25:12.068 }, 00:25:12.068 { 00:25:12.068 "name": "BaseBdev3", 00:25:12.068 "uuid": "7e4fd626-5073-4e2e-a439-4d6202686544", 00:25:12.068 "is_configured": true, 00:25:12.068 "data_offset": 2048, 00:25:12.068 "data_size": 63488 00:25:12.068 } 00:25:12.068 ] 00:25:12.068 }' 00:25:12.068 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:12.068 12:56:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.326 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:12.326 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:12.326 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:12.326 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:12.326 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:12.326 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:12.326 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:12.326 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:12.326 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.326 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.326 [2024-12-05 12:56:54.855911] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:12.326 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.326 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:12.326 "name": "Existed_Raid", 00:25:12.326 "aliases": [ 00:25:12.326 "b6ef41ff-e194-451e-af71-393facbf45e4" 00:25:12.326 ], 00:25:12.326 "product_name": "Raid Volume", 00:25:12.326 "block_size": 512, 00:25:12.326 "num_blocks": 126976, 00:25:12.326 "uuid": "b6ef41ff-e194-451e-af71-393facbf45e4", 00:25:12.326 "assigned_rate_limits": { 00:25:12.326 "rw_ios_per_sec": 0, 00:25:12.326 
"rw_mbytes_per_sec": 0, 00:25:12.326 "r_mbytes_per_sec": 0, 00:25:12.326 "w_mbytes_per_sec": 0 00:25:12.326 }, 00:25:12.326 "claimed": false, 00:25:12.326 "zoned": false, 00:25:12.326 "supported_io_types": { 00:25:12.326 "read": true, 00:25:12.326 "write": true, 00:25:12.326 "unmap": false, 00:25:12.326 "flush": false, 00:25:12.326 "reset": true, 00:25:12.326 "nvme_admin": false, 00:25:12.326 "nvme_io": false, 00:25:12.326 "nvme_io_md": false, 00:25:12.326 "write_zeroes": true, 00:25:12.326 "zcopy": false, 00:25:12.326 "get_zone_info": false, 00:25:12.326 "zone_management": false, 00:25:12.326 "zone_append": false, 00:25:12.326 "compare": false, 00:25:12.326 "compare_and_write": false, 00:25:12.326 "abort": false, 00:25:12.326 "seek_hole": false, 00:25:12.326 "seek_data": false, 00:25:12.326 "copy": false, 00:25:12.326 "nvme_iov_md": false 00:25:12.326 }, 00:25:12.326 "driver_specific": { 00:25:12.326 "raid": { 00:25:12.326 "uuid": "b6ef41ff-e194-451e-af71-393facbf45e4", 00:25:12.326 "strip_size_kb": 64, 00:25:12.326 "state": "online", 00:25:12.326 "raid_level": "raid5f", 00:25:12.326 "superblock": true, 00:25:12.326 "num_base_bdevs": 3, 00:25:12.326 "num_base_bdevs_discovered": 3, 00:25:12.326 "num_base_bdevs_operational": 3, 00:25:12.326 "base_bdevs_list": [ 00:25:12.326 { 00:25:12.326 "name": "BaseBdev1", 00:25:12.326 "uuid": "2f9681f3-de32-4c66-a97b-808f94d5b501", 00:25:12.326 "is_configured": true, 00:25:12.326 "data_offset": 2048, 00:25:12.326 "data_size": 63488 00:25:12.326 }, 00:25:12.326 { 00:25:12.326 "name": "BaseBdev2", 00:25:12.326 "uuid": "ac96895e-267c-4651-a3fc-990f183809bd", 00:25:12.326 "is_configured": true, 00:25:12.326 "data_offset": 2048, 00:25:12.326 "data_size": 63488 00:25:12.326 }, 00:25:12.326 { 00:25:12.326 "name": "BaseBdev3", 00:25:12.327 "uuid": "7e4fd626-5073-4e2e-a439-4d6202686544", 00:25:12.327 "is_configured": true, 00:25:12.327 "data_offset": 2048, 00:25:12.327 "data_size": 63488 00:25:12.327 } 00:25:12.327 ] 00:25:12.327 } 
00:25:12.327 } 00:25:12.327 }' 00:25:12.327 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:12.327 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:12.327 BaseBdev2 00:25:12.327 BaseBdev3' 00:25:12.327 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:12.585 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:12.585 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:12.585 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:12.585 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.585 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:12.585 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.585 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.585 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:12.585 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:12.585 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:12.585 12:56:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:12.585 12:56:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:12.585 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.585 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.585 12:56:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.585 [2024-12-05 
12:56:55.043781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:12.585 12:56:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:12.585 "name": "Existed_Raid", 00:25:12.585 "uuid": "b6ef41ff-e194-451e-af71-393facbf45e4", 00:25:12.585 "strip_size_kb": 64, 00:25:12.585 "state": "online", 00:25:12.585 "raid_level": "raid5f", 00:25:12.585 "superblock": true, 00:25:12.585 "num_base_bdevs": 3, 00:25:12.585 "num_base_bdevs_discovered": 2, 00:25:12.585 "num_base_bdevs_operational": 2, 00:25:12.585 "base_bdevs_list": [ 00:25:12.585 { 00:25:12.585 "name": null, 00:25:12.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.585 "is_configured": false, 00:25:12.585 "data_offset": 0, 00:25:12.585 "data_size": 63488 00:25:12.585 }, 00:25:12.585 { 00:25:12.585 "name": "BaseBdev2", 00:25:12.585 "uuid": "ac96895e-267c-4651-a3fc-990f183809bd", 00:25:12.585 "is_configured": true, 00:25:12.585 "data_offset": 2048, 00:25:12.585 "data_size": 63488 00:25:12.585 }, 00:25:12.585 { 00:25:12.585 "name": "BaseBdev3", 00:25:12.585 "uuid": "7e4fd626-5073-4e2e-a439-4d6202686544", 00:25:12.585 "is_configured": true, 00:25:12.585 "data_offset": 2048, 00:25:12.585 "data_size": 63488 00:25:12.585 } 00:25:12.585 ] 00:25:12.585 }' 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:12.585 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:25:12.843 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:12.843 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:12.843 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:12.843 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.843 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.843 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.102 [2024-12-05 12:56:55.441949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:13.102 [2024-12-05 12:56:55.442163] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:13.102 [2024-12-05 12:56:55.488054] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:13.102 12:56:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.102 [2024-12-05 12:56:55.528112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:13.102 [2024-12-05 12:56:55.528149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.102 
12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.102 BaseBdev2 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:13.102 12:56:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.102 [ 00:25:13.102 { 00:25:13.102 "name": "BaseBdev2", 00:25:13.102 "aliases": [ 00:25:13.102 "99613195-43b9-45e0-b01a-788aff465ee4" 00:25:13.102 ], 00:25:13.102 "product_name": "Malloc disk", 00:25:13.102 "block_size": 512, 00:25:13.102 "num_blocks": 65536, 00:25:13.102 "uuid": "99613195-43b9-45e0-b01a-788aff465ee4", 00:25:13.102 "assigned_rate_limits": { 00:25:13.102 "rw_ios_per_sec": 0, 00:25:13.102 "rw_mbytes_per_sec": 0, 00:25:13.102 "r_mbytes_per_sec": 0, 00:25:13.102 "w_mbytes_per_sec": 0 00:25:13.102 }, 00:25:13.102 "claimed": false, 00:25:13.102 "zoned": false, 00:25:13.102 "supported_io_types": { 00:25:13.102 "read": true, 00:25:13.102 "write": true, 00:25:13.102 "unmap": true, 00:25:13.102 "flush": true, 00:25:13.102 "reset": true, 00:25:13.102 "nvme_admin": false, 00:25:13.102 "nvme_io": false, 00:25:13.102 "nvme_io_md": false, 00:25:13.102 "write_zeroes": true, 00:25:13.102 "zcopy": true, 00:25:13.102 "get_zone_info": false, 
00:25:13.102 "zone_management": false, 00:25:13.102 "zone_append": false, 00:25:13.102 "compare": false, 00:25:13.102 "compare_and_write": false, 00:25:13.102 "abort": true, 00:25:13.102 "seek_hole": false, 00:25:13.102 "seek_data": false, 00:25:13.102 "copy": true, 00:25:13.102 "nvme_iov_md": false 00:25:13.102 }, 00:25:13.102 "memory_domains": [ 00:25:13.102 { 00:25:13.102 "dma_device_id": "system", 00:25:13.102 "dma_device_type": 1 00:25:13.102 }, 00:25:13.102 { 00:25:13.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.102 "dma_device_type": 2 00:25:13.102 } 00:25:13.102 ], 00:25:13.102 "driver_specific": {} 00:25:13.102 } 00:25:13.102 ] 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:13.102 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.103 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.360 BaseBdev3 00:25:13.360 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.360 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:25:13.360 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:13.360 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:13.360 12:56:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:13.360 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:13.360 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:13.360 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:13.360 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.360 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.360 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.360 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:13.360 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.360 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.360 [ 00:25:13.360 { 00:25:13.360 "name": "BaseBdev3", 00:25:13.360 "aliases": [ 00:25:13.360 "438bb7d3-f634-4e17-baad-7fc18655765a" 00:25:13.360 ], 00:25:13.360 "product_name": "Malloc disk", 00:25:13.360 "block_size": 512, 00:25:13.360 "num_blocks": 65536, 00:25:13.360 "uuid": "438bb7d3-f634-4e17-baad-7fc18655765a", 00:25:13.360 "assigned_rate_limits": { 00:25:13.360 "rw_ios_per_sec": 0, 00:25:13.360 "rw_mbytes_per_sec": 0, 00:25:13.360 "r_mbytes_per_sec": 0, 00:25:13.360 "w_mbytes_per_sec": 0 00:25:13.360 }, 00:25:13.361 "claimed": false, 00:25:13.361 "zoned": false, 00:25:13.361 "supported_io_types": { 00:25:13.361 "read": true, 00:25:13.361 "write": true, 00:25:13.361 "unmap": true, 00:25:13.361 "flush": true, 00:25:13.361 "reset": true, 00:25:13.361 "nvme_admin": false, 00:25:13.361 "nvme_io": false, 00:25:13.361 "nvme_io_md": 
false, 00:25:13.361 "write_zeroes": true, 00:25:13.361 "zcopy": true, 00:25:13.361 "get_zone_info": false, 00:25:13.361 "zone_management": false, 00:25:13.361 "zone_append": false, 00:25:13.361 "compare": false, 00:25:13.361 "compare_and_write": false, 00:25:13.361 "abort": true, 00:25:13.361 "seek_hole": false, 00:25:13.361 "seek_data": false, 00:25:13.361 "copy": true, 00:25:13.361 "nvme_iov_md": false 00:25:13.361 }, 00:25:13.361 "memory_domains": [ 00:25:13.361 { 00:25:13.361 "dma_device_id": "system", 00:25:13.361 "dma_device_type": 1 00:25:13.361 }, 00:25:13.361 { 00:25:13.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.361 "dma_device_type": 2 00:25:13.361 } 00:25:13.361 ], 00:25:13.361 "driver_specific": {} 00:25:13.361 } 00:25:13.361 ] 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.361 [2024-12-05 12:56:55.730466] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:13.361 [2024-12-05 12:56:55.730516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:13.361 [2024-12-05 12:56:55.730534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:25:13.361 [2024-12-05 12:56:55.731995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.361 12:56:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:13.361 "name": "Existed_Raid", 00:25:13.361 "uuid": "0ba5ac15-ac88-489d-8996-b85abe3a30c8", 00:25:13.361 "strip_size_kb": 64, 00:25:13.361 "state": "configuring", 00:25:13.361 "raid_level": "raid5f", 00:25:13.361 "superblock": true, 00:25:13.361 "num_base_bdevs": 3, 00:25:13.361 "num_base_bdevs_discovered": 2, 00:25:13.361 "num_base_bdevs_operational": 3, 00:25:13.361 "base_bdevs_list": [ 00:25:13.361 { 00:25:13.361 "name": "BaseBdev1", 00:25:13.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.361 "is_configured": false, 00:25:13.361 "data_offset": 0, 00:25:13.361 "data_size": 0 00:25:13.361 }, 00:25:13.361 { 00:25:13.361 "name": "BaseBdev2", 00:25:13.361 "uuid": "99613195-43b9-45e0-b01a-788aff465ee4", 00:25:13.361 "is_configured": true, 00:25:13.361 "data_offset": 2048, 00:25:13.361 "data_size": 63488 00:25:13.361 }, 00:25:13.361 { 00:25:13.361 "name": "BaseBdev3", 00:25:13.361 "uuid": "438bb7d3-f634-4e17-baad-7fc18655765a", 00:25:13.361 "is_configured": true, 00:25:13.361 "data_offset": 2048, 00:25:13.361 "data_size": 63488 00:25:13.361 } 00:25:13.361 ] 00:25:13.361 }' 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:13.361 12:56:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.618 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:25:13.618 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.618 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.618 [2024-12-05 12:56:56.054541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:13.618 
12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.618 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:13.618 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:13.618 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:13.618 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:13.618 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:13.618 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:13.618 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:13.618 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:13.618 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:13.618 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:13.618 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.618 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.618 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.618 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:13.618 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.618 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:25:13.618 "name": "Existed_Raid", 00:25:13.618 "uuid": "0ba5ac15-ac88-489d-8996-b85abe3a30c8", 00:25:13.618 "strip_size_kb": 64, 00:25:13.618 "state": "configuring", 00:25:13.618 "raid_level": "raid5f", 00:25:13.618 "superblock": true, 00:25:13.618 "num_base_bdevs": 3, 00:25:13.618 "num_base_bdevs_discovered": 1, 00:25:13.618 "num_base_bdevs_operational": 3, 00:25:13.618 "base_bdevs_list": [ 00:25:13.618 { 00:25:13.618 "name": "BaseBdev1", 00:25:13.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.618 "is_configured": false, 00:25:13.618 "data_offset": 0, 00:25:13.619 "data_size": 0 00:25:13.619 }, 00:25:13.619 { 00:25:13.619 "name": null, 00:25:13.619 "uuid": "99613195-43b9-45e0-b01a-788aff465ee4", 00:25:13.619 "is_configured": false, 00:25:13.619 "data_offset": 0, 00:25:13.619 "data_size": 63488 00:25:13.619 }, 00:25:13.619 { 00:25:13.619 "name": "BaseBdev3", 00:25:13.619 "uuid": "438bb7d3-f634-4e17-baad-7fc18655765a", 00:25:13.619 "is_configured": true, 00:25:13.619 "data_offset": 2048, 00:25:13.619 "data_size": 63488 00:25:13.619 } 00:25:13.619 ] 00:25:13.619 }' 00:25:13.619 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:13.619 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.877 [2024-12-05 12:56:56.413305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:13.877 BaseBdev1 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:13.877 
12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.877 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.877 [ 00:25:13.877 { 00:25:13.877 "name": "BaseBdev1", 00:25:13.877 "aliases": [ 00:25:13.877 "13f54281-5422-436c-bfc3-ddd62f9eebc3" 00:25:13.877 ], 00:25:13.877 "product_name": "Malloc disk", 00:25:13.877 "block_size": 512, 00:25:13.877 "num_blocks": 65536, 00:25:13.877 "uuid": "13f54281-5422-436c-bfc3-ddd62f9eebc3", 00:25:13.877 "assigned_rate_limits": { 00:25:13.877 "rw_ios_per_sec": 0, 00:25:13.877 "rw_mbytes_per_sec": 0, 00:25:13.877 "r_mbytes_per_sec": 0, 00:25:13.877 "w_mbytes_per_sec": 0 00:25:13.877 }, 00:25:13.877 "claimed": true, 00:25:13.877 "claim_type": "exclusive_write", 00:25:13.877 "zoned": false, 00:25:13.877 "supported_io_types": { 00:25:13.877 "read": true, 00:25:13.877 "write": true, 00:25:13.877 "unmap": true, 00:25:13.877 "flush": true, 00:25:13.877 "reset": true, 00:25:13.877 "nvme_admin": false, 00:25:13.877 "nvme_io": false, 00:25:13.877 "nvme_io_md": false, 00:25:13.877 "write_zeroes": true, 00:25:13.877 "zcopy": true, 00:25:13.877 "get_zone_info": false, 00:25:13.877 "zone_management": false, 00:25:13.877 "zone_append": false, 00:25:13.877 "compare": false, 00:25:13.878 "compare_and_write": false, 00:25:13.878 "abort": true, 00:25:13.878 "seek_hole": false, 00:25:13.878 "seek_data": false, 00:25:13.878 "copy": true, 00:25:13.878 "nvme_iov_md": false 00:25:13.878 }, 00:25:13.878 "memory_domains": [ 00:25:13.878 { 00:25:13.878 "dma_device_id": "system", 00:25:13.878 "dma_device_type": 1 00:25:13.878 }, 00:25:13.878 { 00:25:13.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.878 "dma_device_type": 2 00:25:13.878 } 00:25:13.878 ], 00:25:13.878 "driver_specific": {} 00:25:13.878 } 00:25:13.878 ] 00:25:13.878 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.878 
12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:13.878 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:13.878 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:13.878 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:13.878 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:13.878 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:13.878 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:13.878 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:13.878 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:13.878 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:13.878 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:13.878 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:13.878 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.878 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.878 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.878 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.137 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:25:14.137 "name": "Existed_Raid", 00:25:14.137 "uuid": "0ba5ac15-ac88-489d-8996-b85abe3a30c8", 00:25:14.137 "strip_size_kb": 64, 00:25:14.137 "state": "configuring", 00:25:14.137 "raid_level": "raid5f", 00:25:14.137 "superblock": true, 00:25:14.137 "num_base_bdevs": 3, 00:25:14.137 "num_base_bdevs_discovered": 2, 00:25:14.137 "num_base_bdevs_operational": 3, 00:25:14.137 "base_bdevs_list": [ 00:25:14.137 { 00:25:14.137 "name": "BaseBdev1", 00:25:14.137 "uuid": "13f54281-5422-436c-bfc3-ddd62f9eebc3", 00:25:14.137 "is_configured": true, 00:25:14.137 "data_offset": 2048, 00:25:14.137 "data_size": 63488 00:25:14.137 }, 00:25:14.137 { 00:25:14.137 "name": null, 00:25:14.137 "uuid": "99613195-43b9-45e0-b01a-788aff465ee4", 00:25:14.137 "is_configured": false, 00:25:14.137 "data_offset": 0, 00:25:14.137 "data_size": 63488 00:25:14.137 }, 00:25:14.137 { 00:25:14.137 "name": "BaseBdev3", 00:25:14.137 "uuid": "438bb7d3-f634-4e17-baad-7fc18655765a", 00:25:14.137 "is_configured": true, 00:25:14.137 "data_offset": 2048, 00:25:14.137 "data_size": 63488 00:25:14.137 } 00:25:14.137 ] 00:25:14.137 }' 00:25:14.137 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:14.137 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.396 [2024-12-05 12:56:56.769441] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:14.396 12:56:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.396 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.397 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.397 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:14.397 "name": "Existed_Raid", 00:25:14.397 "uuid": "0ba5ac15-ac88-489d-8996-b85abe3a30c8", 00:25:14.397 "strip_size_kb": 64, 00:25:14.397 "state": "configuring", 00:25:14.397 "raid_level": "raid5f", 00:25:14.397 "superblock": true, 00:25:14.397 "num_base_bdevs": 3, 00:25:14.397 "num_base_bdevs_discovered": 1, 00:25:14.397 "num_base_bdevs_operational": 3, 00:25:14.397 "base_bdevs_list": [ 00:25:14.397 { 00:25:14.397 "name": "BaseBdev1", 00:25:14.397 "uuid": "13f54281-5422-436c-bfc3-ddd62f9eebc3", 00:25:14.397 "is_configured": true, 00:25:14.397 "data_offset": 2048, 00:25:14.397 "data_size": 63488 00:25:14.397 }, 00:25:14.397 { 00:25:14.397 "name": null, 00:25:14.397 "uuid": "99613195-43b9-45e0-b01a-788aff465ee4", 00:25:14.397 "is_configured": false, 00:25:14.397 "data_offset": 0, 00:25:14.397 "data_size": 63488 00:25:14.397 }, 00:25:14.397 { 00:25:14.397 "name": null, 00:25:14.397 "uuid": "438bb7d3-f634-4e17-baad-7fc18655765a", 00:25:14.397 "is_configured": false, 00:25:14.397 "data_offset": 0, 00:25:14.397 "data_size": 63488 00:25:14.397 } 00:25:14.397 ] 00:25:14.397 }' 00:25:14.397 12:56:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:14.397 12:56:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.655 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:14.655 12:56:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:14.655 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.655 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.655 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.655 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:25:14.655 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:14.655 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.655 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.655 [2024-12-05 12:56:57.105527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:14.655 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.655 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:14.655 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:14.655 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:14.655 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:14.655 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:14.655 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:14.655 12:56:57 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:14.655 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:14.656 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:14.656 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:14.656 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:14.656 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.656 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.656 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:14.656 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.656 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:14.656 "name": "Existed_Raid", 00:25:14.656 "uuid": "0ba5ac15-ac88-489d-8996-b85abe3a30c8", 00:25:14.656 "strip_size_kb": 64, 00:25:14.656 "state": "configuring", 00:25:14.656 "raid_level": "raid5f", 00:25:14.656 "superblock": true, 00:25:14.656 "num_base_bdevs": 3, 00:25:14.656 "num_base_bdevs_discovered": 2, 00:25:14.656 "num_base_bdevs_operational": 3, 00:25:14.656 "base_bdevs_list": [ 00:25:14.656 { 00:25:14.656 "name": "BaseBdev1", 00:25:14.656 "uuid": "13f54281-5422-436c-bfc3-ddd62f9eebc3", 00:25:14.656 "is_configured": true, 00:25:14.656 "data_offset": 2048, 00:25:14.656 "data_size": 63488 00:25:14.656 }, 00:25:14.656 { 00:25:14.656 "name": null, 00:25:14.656 "uuid": "99613195-43b9-45e0-b01a-788aff465ee4", 00:25:14.656 "is_configured": false, 00:25:14.656 "data_offset": 0, 00:25:14.656 "data_size": 63488 00:25:14.656 }, 00:25:14.656 { 00:25:14.656 "name": "BaseBdev3", 00:25:14.656 
"uuid": "438bb7d3-f634-4e17-baad-7fc18655765a", 00:25:14.656 "is_configured": true, 00:25:14.656 "data_offset": 2048, 00:25:14.656 "data_size": 63488 00:25:14.656 } 00:25:14.656 ] 00:25:14.656 }' 00:25:14.656 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:14.656 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.914 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:14.914 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.914 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.914 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:14.914 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.914 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:25:14.914 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:14.914 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.914 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.914 [2024-12-05 12:56:57.437578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:14.914 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.914 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:14.914 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:14.914 12:56:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:14.914 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:14.914 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:14.914 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:14.914 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:14.914 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:14.914 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:14.914 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:14.915 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:14.915 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:14.915 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.915 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.173 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.173 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:15.173 "name": "Existed_Raid", 00:25:15.174 "uuid": "0ba5ac15-ac88-489d-8996-b85abe3a30c8", 00:25:15.174 "strip_size_kb": 64, 00:25:15.174 "state": "configuring", 00:25:15.174 "raid_level": "raid5f", 00:25:15.174 "superblock": true, 00:25:15.174 "num_base_bdevs": 3, 00:25:15.174 "num_base_bdevs_discovered": 1, 00:25:15.174 "num_base_bdevs_operational": 3, 00:25:15.174 
"base_bdevs_list": [ 00:25:15.174 { 00:25:15.174 "name": null, 00:25:15.174 "uuid": "13f54281-5422-436c-bfc3-ddd62f9eebc3", 00:25:15.174 "is_configured": false, 00:25:15.174 "data_offset": 0, 00:25:15.174 "data_size": 63488 00:25:15.174 }, 00:25:15.174 { 00:25:15.174 "name": null, 00:25:15.174 "uuid": "99613195-43b9-45e0-b01a-788aff465ee4", 00:25:15.174 "is_configured": false, 00:25:15.174 "data_offset": 0, 00:25:15.174 "data_size": 63488 00:25:15.174 }, 00:25:15.174 { 00:25:15.174 "name": "BaseBdev3", 00:25:15.174 "uuid": "438bb7d3-f634-4e17-baad-7fc18655765a", 00:25:15.174 "is_configured": true, 00:25:15.174 "data_offset": 2048, 00:25:15.174 "data_size": 63488 00:25:15.174 } 00:25:15.174 ] 00:25:15.174 }' 00:25:15.174 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:15.174 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.432 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.432 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:15.432 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.432 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.432 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.432 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:25:15.432 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:15.432 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.432 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:25:15.432 [2024-12-05 12:56:57.819476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:15.432 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.432 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:15.432 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:15.432 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:15.432 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:15.433 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:15.433 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:15.433 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:15.433 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:15.433 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:15.433 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:15.433 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.433 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:15.433 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.433 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.433 12:56:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.433 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:15.433 "name": "Existed_Raid", 00:25:15.433 "uuid": "0ba5ac15-ac88-489d-8996-b85abe3a30c8", 00:25:15.433 "strip_size_kb": 64, 00:25:15.433 "state": "configuring", 00:25:15.433 "raid_level": "raid5f", 00:25:15.433 "superblock": true, 00:25:15.433 "num_base_bdevs": 3, 00:25:15.433 "num_base_bdevs_discovered": 2, 00:25:15.433 "num_base_bdevs_operational": 3, 00:25:15.433 "base_bdevs_list": [ 00:25:15.433 { 00:25:15.433 "name": null, 00:25:15.433 "uuid": "13f54281-5422-436c-bfc3-ddd62f9eebc3", 00:25:15.433 "is_configured": false, 00:25:15.433 "data_offset": 0, 00:25:15.433 "data_size": 63488 00:25:15.433 }, 00:25:15.433 { 00:25:15.433 "name": "BaseBdev2", 00:25:15.433 "uuid": "99613195-43b9-45e0-b01a-788aff465ee4", 00:25:15.433 "is_configured": true, 00:25:15.433 "data_offset": 2048, 00:25:15.433 "data_size": 63488 00:25:15.433 }, 00:25:15.433 { 00:25:15.433 "name": "BaseBdev3", 00:25:15.433 "uuid": "438bb7d3-f634-4e17-baad-7fc18655765a", 00:25:15.433 "is_configured": true, 00:25:15.433 "data_offset": 2048, 00:25:15.433 "data_size": 63488 00:25:15.433 } 00:25:15.433 ] 00:25:15.433 }' 00:25:15.433 12:56:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:15.433 12:56:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 13f54281-5422-436c-bfc3-ddd62f9eebc3 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.692 [2024-12-05 12:56:58.205997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:15.692 [2024-12-05 12:56:58.206153] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:15.692 [2024-12-05 12:56:58.206166] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:15.692 [2024-12-05 12:56:58.206362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:15.692 NewBaseBdev 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:25:15.692 12:56:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.692 [2024-12-05 12:56:58.209306] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:15.692 [2024-12-05 12:56:58.209321] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:25:15.692 [2024-12-05 12:56:58.209423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.692 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.692 [ 00:25:15.692 { 00:25:15.692 "name": "NewBaseBdev", 00:25:15.692 "aliases": [ 00:25:15.692 "13f54281-5422-436c-bfc3-ddd62f9eebc3" 00:25:15.692 ], 00:25:15.692 "product_name": "Malloc 
disk", 00:25:15.692 "block_size": 512, 00:25:15.692 "num_blocks": 65536, 00:25:15.693 "uuid": "13f54281-5422-436c-bfc3-ddd62f9eebc3", 00:25:15.693 "assigned_rate_limits": { 00:25:15.693 "rw_ios_per_sec": 0, 00:25:15.693 "rw_mbytes_per_sec": 0, 00:25:15.693 "r_mbytes_per_sec": 0, 00:25:15.693 "w_mbytes_per_sec": 0 00:25:15.693 }, 00:25:15.693 "claimed": true, 00:25:15.693 "claim_type": "exclusive_write", 00:25:15.693 "zoned": false, 00:25:15.693 "supported_io_types": { 00:25:15.693 "read": true, 00:25:15.693 "write": true, 00:25:15.693 "unmap": true, 00:25:15.693 "flush": true, 00:25:15.693 "reset": true, 00:25:15.693 "nvme_admin": false, 00:25:15.693 "nvme_io": false, 00:25:15.693 "nvme_io_md": false, 00:25:15.693 "write_zeroes": true, 00:25:15.693 "zcopy": true, 00:25:15.693 "get_zone_info": false, 00:25:15.693 "zone_management": false, 00:25:15.693 "zone_append": false, 00:25:15.693 "compare": false, 00:25:15.693 "compare_and_write": false, 00:25:15.693 "abort": true, 00:25:15.693 "seek_hole": false, 00:25:15.693 "seek_data": false, 00:25:15.693 "copy": true, 00:25:15.693 "nvme_iov_md": false 00:25:15.693 }, 00:25:15.693 "memory_domains": [ 00:25:15.693 { 00:25:15.693 "dma_device_id": "system", 00:25:15.693 "dma_device_type": 1 00:25:15.693 }, 00:25:15.693 { 00:25:15.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:15.693 "dma_device_type": 2 00:25:15.693 } 00:25:15.693 ], 00:25:15.693 "driver_specific": {} 00:25:15.693 } 00:25:15.693 ] 00:25:15.693 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.693 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:15.693 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:15.693 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:15.693 12:56:58 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:15.693 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:15.693 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:15.693 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:15.693 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:15.693 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:15.693 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:15.693 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:15.693 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:15.693 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.693 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.693 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.693 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.693 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:15.693 "name": "Existed_Raid", 00:25:15.693 "uuid": "0ba5ac15-ac88-489d-8996-b85abe3a30c8", 00:25:15.693 "strip_size_kb": 64, 00:25:15.693 "state": "online", 00:25:15.693 "raid_level": "raid5f", 00:25:15.693 "superblock": true, 00:25:15.693 "num_base_bdevs": 3, 00:25:15.693 "num_base_bdevs_discovered": 3, 00:25:15.693 "num_base_bdevs_operational": 3, 00:25:15.693 
"base_bdevs_list": [ 00:25:15.693 { 00:25:15.693 "name": "NewBaseBdev", 00:25:15.693 "uuid": "13f54281-5422-436c-bfc3-ddd62f9eebc3", 00:25:15.693 "is_configured": true, 00:25:15.693 "data_offset": 2048, 00:25:15.693 "data_size": 63488 00:25:15.693 }, 00:25:15.693 { 00:25:15.693 "name": "BaseBdev2", 00:25:15.693 "uuid": "99613195-43b9-45e0-b01a-788aff465ee4", 00:25:15.693 "is_configured": true, 00:25:15.693 "data_offset": 2048, 00:25:15.693 "data_size": 63488 00:25:15.693 }, 00:25:15.693 { 00:25:15.693 "name": "BaseBdev3", 00:25:15.693 "uuid": "438bb7d3-f634-4e17-baad-7fc18655765a", 00:25:15.693 "is_configured": true, 00:25:15.693 "data_offset": 2048, 00:25:15.693 "data_size": 63488 00:25:15.693 } 00:25:15.693 ] 00:25:15.693 }' 00:25:15.693 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:15.693 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.952 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:25:15.952 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:15.952 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:15.952 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:15.952 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:15.952 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:15.952 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:15.952 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.952 12:56:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:25:15.952 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:15.952 [2024-12-05 12:56:58.528912] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:16.211 "name": "Existed_Raid", 00:25:16.211 "aliases": [ 00:25:16.211 "0ba5ac15-ac88-489d-8996-b85abe3a30c8" 00:25:16.211 ], 00:25:16.211 "product_name": "Raid Volume", 00:25:16.211 "block_size": 512, 00:25:16.211 "num_blocks": 126976, 00:25:16.211 "uuid": "0ba5ac15-ac88-489d-8996-b85abe3a30c8", 00:25:16.211 "assigned_rate_limits": { 00:25:16.211 "rw_ios_per_sec": 0, 00:25:16.211 "rw_mbytes_per_sec": 0, 00:25:16.211 "r_mbytes_per_sec": 0, 00:25:16.211 "w_mbytes_per_sec": 0 00:25:16.211 }, 00:25:16.211 "claimed": false, 00:25:16.211 "zoned": false, 00:25:16.211 "supported_io_types": { 00:25:16.211 "read": true, 00:25:16.211 "write": true, 00:25:16.211 "unmap": false, 00:25:16.211 "flush": false, 00:25:16.211 "reset": true, 00:25:16.211 "nvme_admin": false, 00:25:16.211 "nvme_io": false, 00:25:16.211 "nvme_io_md": false, 00:25:16.211 "write_zeroes": true, 00:25:16.211 "zcopy": false, 00:25:16.211 "get_zone_info": false, 00:25:16.211 "zone_management": false, 00:25:16.211 "zone_append": false, 00:25:16.211 "compare": false, 00:25:16.211 "compare_and_write": false, 00:25:16.211 "abort": false, 00:25:16.211 "seek_hole": false, 00:25:16.211 "seek_data": false, 00:25:16.211 "copy": false, 00:25:16.211 "nvme_iov_md": false 00:25:16.211 }, 00:25:16.211 "driver_specific": { 00:25:16.211 "raid": { 00:25:16.211 "uuid": "0ba5ac15-ac88-489d-8996-b85abe3a30c8", 00:25:16.211 "strip_size_kb": 64, 00:25:16.211 "state": "online", 00:25:16.211 "raid_level": "raid5f", 00:25:16.211 "superblock": true, 
00:25:16.211 "num_base_bdevs": 3, 00:25:16.211 "num_base_bdevs_discovered": 3, 00:25:16.211 "num_base_bdevs_operational": 3, 00:25:16.211 "base_bdevs_list": [ 00:25:16.211 { 00:25:16.211 "name": "NewBaseBdev", 00:25:16.211 "uuid": "13f54281-5422-436c-bfc3-ddd62f9eebc3", 00:25:16.211 "is_configured": true, 00:25:16.211 "data_offset": 2048, 00:25:16.211 "data_size": 63488 00:25:16.211 }, 00:25:16.211 { 00:25:16.211 "name": "BaseBdev2", 00:25:16.211 "uuid": "99613195-43b9-45e0-b01a-788aff465ee4", 00:25:16.211 "is_configured": true, 00:25:16.211 "data_offset": 2048, 00:25:16.211 "data_size": 63488 00:25:16.211 }, 00:25:16.211 { 00:25:16.211 "name": "BaseBdev3", 00:25:16.211 "uuid": "438bb7d3-f634-4e17-baad-7fc18655765a", 00:25:16.211 "is_configured": true, 00:25:16.211 "data_offset": 2048, 00:25:16.211 "data_size": 63488 00:25:16.211 } 00:25:16.211 ] 00:25:16.211 } 00:25:16.211 } 00:25:16.211 }' 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:25:16.211 BaseBdev2 00:25:16.211 BaseBdev3' 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.211 [2024-12-05 12:56:58.716779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:16.211 [2024-12-05 12:56:58.716799] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:16.211 [2024-12-05 12:56:58.716859] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:16.211 [2024-12-05 12:56:58.717079] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:16.211 [2024-12-05 12:56:58.717090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:25:16.211 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.212 12:56:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 78045 00:25:16.212 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78045 ']' 00:25:16.212 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 78045 00:25:16.212 
12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:25:16.212 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:16.212 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78045 00:25:16.212 killing process with pid 78045 00:25:16.212 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:16.212 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:16.212 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78045' 00:25:16.212 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 78045 00:25:16.212 [2024-12-05 12:56:58.748053] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:16.212 12:56:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 78045 00:25:16.470 [2024-12-05 12:56:58.893120] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:17.037 12:56:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:25:17.037 00:25:17.037 real 0m7.238s 00:25:17.037 user 0m11.657s 00:25:17.037 sys 0m1.223s 00:25:17.037 12:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:17.037 ************************************ 00:25:17.037 END TEST raid5f_state_function_test_sb 00:25:17.037 ************************************ 00:25:17.037 12:56:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:17.037 12:56:59 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:25:17.037 12:56:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 
00:25:17.037 12:56:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:17.037 12:56:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:17.037 ************************************ 00:25:17.037 START TEST raid5f_superblock_test 00:25:17.037 ************************************ 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:25:17.037 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=78632 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 78632 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 78632 ']' 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.037 12:56:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:25:17.037 [2024-12-05 12:56:59.572452] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:25:17.037 [2024-12-05 12:56:59.572757] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78632 ] 00:25:17.295 [2024-12-05 12:56:59.729585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.296 [2024-12-05 12:56:59.829200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.563 [2024-12-05 12:56:59.966031] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:17.563 [2024-12-05 12:56:59.966166] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.213 malloc1 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.213 [2024-12-05 12:57:00.461887] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:18.213 [2024-12-05 12:57:00.462057] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:18.213 [2024-12-05 12:57:00.462135] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:18.213 [2024-12-05 12:57:00.462192] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:18.213 [2024-12-05 12:57:00.464384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:18.213 [2024-12-05 12:57:00.464505] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:18.213 pt1 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.213 malloc2 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.213 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.213 [2024-12-05 12:57:00.501799] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:18.213 [2024-12-05 12:57:00.501842] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:18.213 [2024-12-05 12:57:00.501865] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:18.213 [2024-12-05 12:57:00.501873] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:18.214 [2024-12-05 12:57:00.503948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:18.214 [2024-12-05 12:57:00.503980] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:18.214 pt2 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.214 malloc3 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.214 [2024-12-05 12:57:00.548509] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:18.214 [2024-12-05 12:57:00.548588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:18.214 [2024-12-05 12:57:00.548609] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:18.214 [2024-12-05 12:57:00.548618] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:18.214 [2024-12-05 12:57:00.550676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:18.214 [2024-12-05 12:57:00.550708] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:18.214 pt3 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.214 [2024-12-05 12:57:00.556557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:18.214 [2024-12-05 12:57:00.558341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:18.214 [2024-12-05 12:57:00.558399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:18.214 [2024-12-05 12:57:00.558677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:18.214 [2024-12-05 12:57:00.558759] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:25:18.214 [2024-12-05 12:57:00.559018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:18.214 [2024-12-05 12:57:00.562831] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:18.214 [2024-12-05 12:57:00.562917] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:18.214 [2024-12-05 12:57:00.563146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.214 
12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:18.214 "name": "raid_bdev1", 00:25:18.214 "uuid": "90856cbc-b196-4bc0-aee9-8949ea66174c", 00:25:18.214 "strip_size_kb": 64, 00:25:18.214 "state": "online", 00:25:18.214 "raid_level": "raid5f", 00:25:18.214 "superblock": true, 00:25:18.214 "num_base_bdevs": 3, 00:25:18.214 "num_base_bdevs_discovered": 3, 00:25:18.214 "num_base_bdevs_operational": 3, 00:25:18.214 "base_bdevs_list": [ 00:25:18.214 { 00:25:18.214 "name": "pt1", 00:25:18.214 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:18.214 "is_configured": true, 00:25:18.214 "data_offset": 2048, 00:25:18.214 "data_size": 63488 00:25:18.214 }, 00:25:18.214 { 00:25:18.214 "name": "pt2", 00:25:18.214 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:18.214 "is_configured": true, 00:25:18.214 "data_offset": 2048, 00:25:18.214 "data_size": 63488 00:25:18.214 }, 00:25:18.214 { 00:25:18.214 "name": "pt3", 00:25:18.214 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:18.214 "is_configured": true, 00:25:18.214 "data_offset": 2048, 00:25:18.214 "data_size": 63488 00:25:18.214 } 00:25:18.214 ] 00:25:18.214 }' 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:18.214 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.490 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:18.490 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:18.490 12:57:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:18.490 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:18.490 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:18.490 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:18.490 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:18.490 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:18.490 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.490 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.490 [2024-12-05 12:57:00.887801] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:18.490 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.490 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:18.490 "name": "raid_bdev1", 00:25:18.490 "aliases": [ 00:25:18.490 "90856cbc-b196-4bc0-aee9-8949ea66174c" 00:25:18.490 ], 00:25:18.490 "product_name": "Raid Volume", 00:25:18.490 "block_size": 512, 00:25:18.490 "num_blocks": 126976, 00:25:18.490 "uuid": "90856cbc-b196-4bc0-aee9-8949ea66174c", 00:25:18.490 "assigned_rate_limits": { 00:25:18.490 "rw_ios_per_sec": 0, 00:25:18.490 "rw_mbytes_per_sec": 0, 00:25:18.490 "r_mbytes_per_sec": 0, 00:25:18.490 "w_mbytes_per_sec": 0 00:25:18.490 }, 00:25:18.490 "claimed": false, 00:25:18.490 "zoned": false, 00:25:18.490 "supported_io_types": { 00:25:18.490 "read": true, 00:25:18.490 "write": true, 00:25:18.490 "unmap": false, 00:25:18.491 "flush": false, 00:25:18.491 "reset": true, 00:25:18.491 "nvme_admin": false, 00:25:18.491 "nvme_io": false, 00:25:18.491 "nvme_io_md": false, 
00:25:18.491 "write_zeroes": true, 00:25:18.491 "zcopy": false, 00:25:18.491 "get_zone_info": false, 00:25:18.491 "zone_management": false, 00:25:18.491 "zone_append": false, 00:25:18.491 "compare": false, 00:25:18.491 "compare_and_write": false, 00:25:18.491 "abort": false, 00:25:18.491 "seek_hole": false, 00:25:18.491 "seek_data": false, 00:25:18.491 "copy": false, 00:25:18.491 "nvme_iov_md": false 00:25:18.491 }, 00:25:18.491 "driver_specific": { 00:25:18.491 "raid": { 00:25:18.491 "uuid": "90856cbc-b196-4bc0-aee9-8949ea66174c", 00:25:18.491 "strip_size_kb": 64, 00:25:18.491 "state": "online", 00:25:18.491 "raid_level": "raid5f", 00:25:18.491 "superblock": true, 00:25:18.491 "num_base_bdevs": 3, 00:25:18.491 "num_base_bdevs_discovered": 3, 00:25:18.491 "num_base_bdevs_operational": 3, 00:25:18.491 "base_bdevs_list": [ 00:25:18.491 { 00:25:18.491 "name": "pt1", 00:25:18.491 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:18.491 "is_configured": true, 00:25:18.491 "data_offset": 2048, 00:25:18.491 "data_size": 63488 00:25:18.491 }, 00:25:18.491 { 00:25:18.491 "name": "pt2", 00:25:18.491 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:18.491 "is_configured": true, 00:25:18.491 "data_offset": 2048, 00:25:18.491 "data_size": 63488 00:25:18.491 }, 00:25:18.491 { 00:25:18.491 "name": "pt3", 00:25:18.491 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:18.491 "is_configured": true, 00:25:18.491 "data_offset": 2048, 00:25:18.491 "data_size": 63488 00:25:18.491 } 00:25:18.491 ] 00:25:18.491 } 00:25:18.491 } 00:25:18.491 }' 00:25:18.491 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:18.491 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:18.491 pt2 00:25:18.491 pt3' 00:25:18.491 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:25:18.491 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:18.491 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:18.491 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:18.491 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.491 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.491 12:57:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:18.491 12:57:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.491 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:18.491 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:18.491 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:18.491 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:18.491 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.491 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.491 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:18.491 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.491 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:18.491 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:18.491 
12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:18.491 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:18.491 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:25:18.491 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.491 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.491 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:18.750 [2024-12-05 12:57:01.087812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=90856cbc-b196-4bc0-aee9-8949ea66174c 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 90856cbc-b196-4bc0-aee9-8949ea66174c ']' 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:18.750 12:57:01 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.750 [2024-12-05 12:57:01.119611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:18.750 [2024-12-05 12:57:01.119635] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:18.750 [2024-12-05 12:57:01.119699] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:18.750 [2024-12-05 12:57:01.119775] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:18.750 [2024-12-05 12:57:01.119785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:18.750 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.751 [2024-12-05 12:57:01.219689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:18.751 [2024-12-05 12:57:01.221559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:18.751 [2024-12-05 12:57:01.221707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:18.751 [2024-12-05 12:57:01.221762] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:18.751 [2024-12-05 12:57:01.221825] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:18.751 [2024-12-05 12:57:01.221856] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:25:18.751 [2024-12-05 12:57:01.221882] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:18.751 [2024-12-05 12:57:01.221895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:25:18.751 request: 00:25:18.751 { 00:25:18.751 "name": "raid_bdev1", 00:25:18.751 "raid_level": "raid5f", 00:25:18.751 "base_bdevs": [ 00:25:18.751 "malloc1", 00:25:18.751 "malloc2", 00:25:18.751 "malloc3" 00:25:18.751 ], 00:25:18.751 "strip_size_kb": 64, 00:25:18.751 "superblock": false, 00:25:18.751 "method": "bdev_raid_create", 00:25:18.751 "req_id": 1 00:25:18.751 } 00:25:18.751 Got JSON-RPC error response 00:25:18.751 response: 00:25:18.751 { 00:25:18.751 "code": -17, 00:25:18.751 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:18.751 } 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.751 
12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.751 [2024-12-05 12:57:01.259661] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:18.751 [2024-12-05 12:57:01.259708] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:18.751 [2024-12-05 12:57:01.259726] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:25:18.751 [2024-12-05 12:57:01.259735] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:18.751 [2024-12-05 12:57:01.261910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:18.751 [2024-12-05 12:57:01.261942] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:18.751 [2024-12-05 12:57:01.262018] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:18.751 [2024-12-05 12:57:01.262063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:18.751 pt1 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.751 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:18.751 "name": "raid_bdev1", 00:25:18.751 "uuid": "90856cbc-b196-4bc0-aee9-8949ea66174c", 00:25:18.751 "strip_size_kb": 64, 00:25:18.751 "state": "configuring", 00:25:18.751 "raid_level": "raid5f", 00:25:18.751 "superblock": true, 00:25:18.751 "num_base_bdevs": 3, 00:25:18.751 "num_base_bdevs_discovered": 1, 00:25:18.751 
"num_base_bdevs_operational": 3, 00:25:18.751 "base_bdevs_list": [ 00:25:18.751 { 00:25:18.751 "name": "pt1", 00:25:18.751 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:18.751 "is_configured": true, 00:25:18.751 "data_offset": 2048, 00:25:18.751 "data_size": 63488 00:25:18.751 }, 00:25:18.751 { 00:25:18.751 "name": null, 00:25:18.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:18.751 "is_configured": false, 00:25:18.751 "data_offset": 2048, 00:25:18.751 "data_size": 63488 00:25:18.751 }, 00:25:18.751 { 00:25:18.751 "name": null, 00:25:18.751 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:18.751 "is_configured": false, 00:25:18.751 "data_offset": 2048, 00:25:18.751 "data_size": 63488 00:25:18.752 } 00:25:18.752 ] 00:25:18.752 }' 00:25:18.752 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:18.752 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.010 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:25:19.010 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:19.010 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.010 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.010 [2024-12-05 12:57:01.579767] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:19.010 [2024-12-05 12:57:01.579825] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:19.010 [2024-12-05 12:57:01.579846] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:19.010 [2024-12-05 12:57:01.579856] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:19.010 [2024-12-05 12:57:01.580264] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:19.010 [2024-12-05 12:57:01.580284] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:19.010 [2024-12-05 12:57:01.580360] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:19.010 [2024-12-05 12:57:01.580383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:19.010 pt2 00:25:19.010 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.010 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:25:19.010 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.010 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.010 [2024-12-05 12:57:01.587758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:19.010 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.010 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:19.010 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:19.010 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:19.010 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:19.010 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:19.010 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:19.010 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:19.010 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:25:19.010 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:19.010 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:19.268 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.268 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.268 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.268 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.268 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.268 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:19.268 "name": "raid_bdev1", 00:25:19.268 "uuid": "90856cbc-b196-4bc0-aee9-8949ea66174c", 00:25:19.268 "strip_size_kb": 64, 00:25:19.268 "state": "configuring", 00:25:19.268 "raid_level": "raid5f", 00:25:19.268 "superblock": true, 00:25:19.268 "num_base_bdevs": 3, 00:25:19.268 "num_base_bdevs_discovered": 1, 00:25:19.268 "num_base_bdevs_operational": 3, 00:25:19.268 "base_bdevs_list": [ 00:25:19.268 { 00:25:19.268 "name": "pt1", 00:25:19.268 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:19.268 "is_configured": true, 00:25:19.268 "data_offset": 2048, 00:25:19.268 "data_size": 63488 00:25:19.268 }, 00:25:19.268 { 00:25:19.268 "name": null, 00:25:19.268 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:19.268 "is_configured": false, 00:25:19.268 "data_offset": 0, 00:25:19.268 "data_size": 63488 00:25:19.268 }, 00:25:19.268 { 00:25:19.268 "name": null, 00:25:19.268 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:19.268 "is_configured": false, 00:25:19.268 "data_offset": 2048, 00:25:19.268 "data_size": 63488 00:25:19.268 } 00:25:19.268 ] 00:25:19.268 }' 00:25:19.268 12:57:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:19.268 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.527 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:19.527 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:19.527 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:19.527 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.527 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.527 [2024-12-05 12:57:01.903827] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:19.527 [2024-12-05 12:57:01.903894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:19.527 [2024-12-05 12:57:01.903911] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:19.527 [2024-12-05 12:57:01.903923] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:19.528 [2024-12-05 12:57:01.904356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:19.528 [2024-12-05 12:57:01.904378] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:19.528 [2024-12-05 12:57:01.904450] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:19.528 [2024-12-05 12:57:01.904471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:19.528 pt2 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:19.528 12:57:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.528 [2024-12-05 12:57:01.911821] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:19.528 [2024-12-05 12:57:01.911864] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:19.528 [2024-12-05 12:57:01.911879] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:19.528 [2024-12-05 12:57:01.911890] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:19.528 [2024-12-05 12:57:01.912278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:19.528 [2024-12-05 12:57:01.912328] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:19.528 [2024-12-05 12:57:01.912387] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:19.528 [2024-12-05 12:57:01.912407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:19.528 [2024-12-05 12:57:01.912542] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:19.528 [2024-12-05 12:57:01.912586] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:19.528 [2024-12-05 12:57:01.912821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:19.528 [2024-12-05 12:57:01.916341] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:19.528 pt3 00:25:19.528 [2024-12-05 12:57:01.916458] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:19.528 [2024-12-05 12:57:01.916650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:19.528 "name": "raid_bdev1", 00:25:19.528 "uuid": "90856cbc-b196-4bc0-aee9-8949ea66174c", 00:25:19.528 "strip_size_kb": 64, 00:25:19.528 "state": "online", 00:25:19.528 "raid_level": "raid5f", 00:25:19.528 "superblock": true, 00:25:19.528 "num_base_bdevs": 3, 00:25:19.528 "num_base_bdevs_discovered": 3, 00:25:19.528 "num_base_bdevs_operational": 3, 00:25:19.528 "base_bdevs_list": [ 00:25:19.528 { 00:25:19.528 "name": "pt1", 00:25:19.528 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:19.528 "is_configured": true, 00:25:19.528 "data_offset": 2048, 00:25:19.528 "data_size": 63488 00:25:19.528 }, 00:25:19.528 { 00:25:19.528 "name": "pt2", 00:25:19.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:19.528 "is_configured": true, 00:25:19.528 "data_offset": 2048, 00:25:19.528 "data_size": 63488 00:25:19.528 }, 00:25:19.528 { 00:25:19.528 "name": "pt3", 00:25:19.528 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:19.528 "is_configured": true, 00:25:19.528 "data_offset": 2048, 00:25:19.528 "data_size": 63488 00:25:19.528 } 00:25:19.528 ] 00:25:19.528 }' 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:19.528 12:57:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:19.787 12:57:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.787 [2024-12-05 12:57:02.212944] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:19.787 "name": "raid_bdev1", 00:25:19.787 "aliases": [ 00:25:19.787 "90856cbc-b196-4bc0-aee9-8949ea66174c" 00:25:19.787 ], 00:25:19.787 "product_name": "Raid Volume", 00:25:19.787 "block_size": 512, 00:25:19.787 "num_blocks": 126976, 00:25:19.787 "uuid": "90856cbc-b196-4bc0-aee9-8949ea66174c", 00:25:19.787 "assigned_rate_limits": { 00:25:19.787 "rw_ios_per_sec": 0, 00:25:19.787 "rw_mbytes_per_sec": 0, 00:25:19.787 "r_mbytes_per_sec": 0, 00:25:19.787 "w_mbytes_per_sec": 0 00:25:19.787 }, 00:25:19.787 "claimed": false, 00:25:19.787 "zoned": false, 00:25:19.787 "supported_io_types": { 00:25:19.787 "read": true, 00:25:19.787 "write": true, 00:25:19.787 "unmap": false, 00:25:19.787 "flush": false, 00:25:19.787 "reset": true, 00:25:19.787 "nvme_admin": false, 00:25:19.787 "nvme_io": false, 00:25:19.787 "nvme_io_md": false, 00:25:19.787 "write_zeroes": true, 00:25:19.787 "zcopy": false, 00:25:19.787 "get_zone_info": false, 
00:25:19.787 "zone_management": false, 00:25:19.787 "zone_append": false, 00:25:19.787 "compare": false, 00:25:19.787 "compare_and_write": false, 00:25:19.787 "abort": false, 00:25:19.787 "seek_hole": false, 00:25:19.787 "seek_data": false, 00:25:19.787 "copy": false, 00:25:19.787 "nvme_iov_md": false 00:25:19.787 }, 00:25:19.787 "driver_specific": { 00:25:19.787 "raid": { 00:25:19.787 "uuid": "90856cbc-b196-4bc0-aee9-8949ea66174c", 00:25:19.787 "strip_size_kb": 64, 00:25:19.787 "state": "online", 00:25:19.787 "raid_level": "raid5f", 00:25:19.787 "superblock": true, 00:25:19.787 "num_base_bdevs": 3, 00:25:19.787 "num_base_bdevs_discovered": 3, 00:25:19.787 "num_base_bdevs_operational": 3, 00:25:19.787 "base_bdevs_list": [ 00:25:19.787 { 00:25:19.787 "name": "pt1", 00:25:19.787 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:19.787 "is_configured": true, 00:25:19.787 "data_offset": 2048, 00:25:19.787 "data_size": 63488 00:25:19.787 }, 00:25:19.787 { 00:25:19.787 "name": "pt2", 00:25:19.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:19.787 "is_configured": true, 00:25:19.787 "data_offset": 2048, 00:25:19.787 "data_size": 63488 00:25:19.787 }, 00:25:19.787 { 00:25:19.787 "name": "pt3", 00:25:19.787 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:19.787 "is_configured": true, 00:25:19.787 "data_offset": 2048, 00:25:19.787 "data_size": 63488 00:25:19.787 } 00:25:19.787 ] 00:25:19.787 } 00:25:19.787 } 00:25:19.787 }' 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:19.787 pt2 00:25:19.787 pt3' 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.787 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.047 [2024-12-05 12:57:02.404928] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 90856cbc-b196-4bc0-aee9-8949ea66174c '!=' 90856cbc-b196-4bc0-aee9-8949ea66174c ']' 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:25:20.047 12:57:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.047 [2024-12-05 12:57:02.428784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:20.047 "name": "raid_bdev1", 00:25:20.047 "uuid": "90856cbc-b196-4bc0-aee9-8949ea66174c", 00:25:20.047 "strip_size_kb": 64, 00:25:20.047 "state": "online", 00:25:20.047 "raid_level": "raid5f", 00:25:20.047 "superblock": true, 00:25:20.047 "num_base_bdevs": 3, 00:25:20.047 "num_base_bdevs_discovered": 2, 00:25:20.047 "num_base_bdevs_operational": 2, 00:25:20.047 "base_bdevs_list": [ 00:25:20.047 { 00:25:20.047 "name": null, 00:25:20.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.047 "is_configured": false, 00:25:20.047 "data_offset": 0, 00:25:20.047 "data_size": 63488 00:25:20.047 }, 00:25:20.047 { 00:25:20.047 "name": "pt2", 00:25:20.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:20.047 "is_configured": true, 00:25:20.047 "data_offset": 2048, 00:25:20.047 "data_size": 63488 00:25:20.047 }, 00:25:20.047 { 00:25:20.047 "name": "pt3", 00:25:20.047 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:20.047 "is_configured": true, 00:25:20.047 "data_offset": 2048, 00:25:20.047 "data_size": 63488 00:25:20.047 } 00:25:20.047 ] 00:25:20.047 }' 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:20.047 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.306 [2024-12-05 12:57:02.732818] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:25:20.306 [2024-12-05 12:57:02.732841] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:20.306 [2024-12-05 12:57:02.732900] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:20.306 [2024-12-05 12:57:02.732956] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:20.306 [2024-12-05 12:57:02.732968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.306 12:57:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.306 [2024-12-05 12:57:02.792812] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:20.306 [2024-12-05 12:57:02.792860] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:20.306 [2024-12-05 12:57:02.792876] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:25:20.306 [2024-12-05 12:57:02.792886] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:25:20.306 [2024-12-05 12:57:02.795005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:20.306 [2024-12-05 12:57:02.795139] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:20.306 [2024-12-05 12:57:02.795218] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:20.306 [2024-12-05 12:57:02.795261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:20.306 pt2 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:20.306 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:20.307 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:20.307 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:20.307 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:20.307 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:20.307 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.307 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.307 12:57:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.307 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.307 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.307 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:20.307 "name": "raid_bdev1", 00:25:20.307 "uuid": "90856cbc-b196-4bc0-aee9-8949ea66174c", 00:25:20.307 "strip_size_kb": 64, 00:25:20.307 "state": "configuring", 00:25:20.307 "raid_level": "raid5f", 00:25:20.307 "superblock": true, 00:25:20.307 "num_base_bdevs": 3, 00:25:20.307 "num_base_bdevs_discovered": 1, 00:25:20.307 "num_base_bdevs_operational": 2, 00:25:20.307 "base_bdevs_list": [ 00:25:20.307 { 00:25:20.307 "name": null, 00:25:20.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.307 "is_configured": false, 00:25:20.307 "data_offset": 2048, 00:25:20.307 "data_size": 63488 00:25:20.307 }, 00:25:20.307 { 00:25:20.307 "name": "pt2", 00:25:20.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:20.307 "is_configured": true, 00:25:20.307 "data_offset": 2048, 00:25:20.307 "data_size": 63488 00:25:20.307 }, 00:25:20.307 { 00:25:20.307 "name": null, 00:25:20.307 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:20.307 "is_configured": false, 00:25:20.307 "data_offset": 2048, 00:25:20.307 "data_size": 63488 00:25:20.307 } 00:25:20.307 ] 00:25:20.307 }' 00:25:20.307 12:57:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:20.307 12:57:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.565 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:25:20.565 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:25:20.565 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # 
i=2 00:25:20.565 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:20.565 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.565 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.565 [2024-12-05 12:57:03.116912] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:20.565 [2024-12-05 12:57:03.116970] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:20.565 [2024-12-05 12:57:03.116989] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:25:20.565 [2024-12-05 12:57:03.116999] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:20.565 [2024-12-05 12:57:03.117416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:20.565 [2024-12-05 12:57:03.117444] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:20.565 [2024-12-05 12:57:03.117521] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:20.565 [2024-12-05 12:57:03.117558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:20.565 [2024-12-05 12:57:03.117663] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:20.565 [2024-12-05 12:57:03.117674] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:20.565 [2024-12-05 12:57:03.117903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:20.565 [2024-12-05 12:57:03.121365] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:20.565 pt3 00:25:20.565 [2024-12-05 12:57:03.121480] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
raid_bdev1, raid_bdev 0x617000008200 00:25:20.565 [2024-12-05 12:57:03.121741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:20.565 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.565 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:20.565 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:20.565 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:20.565 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:20.565 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:20.565 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:20.565 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:20.565 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:20.565 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:20.566 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:20.566 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.566 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.566 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.566 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.566 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.824 12:57:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:20.824 "name": "raid_bdev1", 00:25:20.824 "uuid": "90856cbc-b196-4bc0-aee9-8949ea66174c", 00:25:20.824 "strip_size_kb": 64, 00:25:20.824 "state": "online", 00:25:20.824 "raid_level": "raid5f", 00:25:20.824 "superblock": true, 00:25:20.824 "num_base_bdevs": 3, 00:25:20.824 "num_base_bdevs_discovered": 2, 00:25:20.824 "num_base_bdevs_operational": 2, 00:25:20.824 "base_bdevs_list": [ 00:25:20.824 { 00:25:20.824 "name": null, 00:25:20.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.824 "is_configured": false, 00:25:20.824 "data_offset": 2048, 00:25:20.824 "data_size": 63488 00:25:20.824 }, 00:25:20.824 { 00:25:20.824 "name": "pt2", 00:25:20.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:20.824 "is_configured": true, 00:25:20.824 "data_offset": 2048, 00:25:20.824 "data_size": 63488 00:25:20.824 }, 00:25:20.824 { 00:25:20.824 "name": "pt3", 00:25:20.824 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:20.824 "is_configured": true, 00:25:20.824 "data_offset": 2048, 00:25:20.824 "data_size": 63488 00:25:20.824 } 00:25:20.824 ] 00:25:20.824 }' 00:25:20.824 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:20.824 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.082 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:21.082 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.082 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.082 [2024-12-05 12:57:03.449739] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:21.082 [2024-12-05 12:57:03.449768] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:21.082 [2024-12-05 12:57:03.449831] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:25:21.082 [2024-12-05 12:57:03.449893] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:21.082 [2024-12-05 12:57:03.449902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:25:21.082 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.082 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.082 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.082 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.082 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:25:21.082 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.082 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:25:21.082 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:25:21.082 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:25:21.082 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:25:21.082 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:25:21.082 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.082 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.082 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.082 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:21.082 12:57:03 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.082 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.082 [2024-12-05 12:57:03.505766] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:21.082 [2024-12-05 12:57:03.505815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:21.082 [2024-12-05 12:57:03.505832] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:21.082 [2024-12-05 12:57:03.505841] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:21.082 [2024-12-05 12:57:03.507999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:21.082 [2024-12-05 12:57:03.508033] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:21.082 [2024-12-05 12:57:03.508102] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:21.082 [2024-12-05 12:57:03.508144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:21.082 [2024-12-05 12:57:03.508284] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:21.082 [2024-12-05 12:57:03.508294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:21.082 [2024-12-05 12:57:03.508310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:25:21.082 [2024-12-05 12:57:03.508356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:21.082 pt1 00:25:21.082 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.082 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:25:21.083 12:57:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:25:21.083 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:21.083 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:21.083 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:21.083 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:21.083 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:21.083 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:21.083 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:21.083 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:21.083 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:21.083 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.083 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.083 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.083 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.083 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.083 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:21.083 "name": "raid_bdev1", 00:25:21.083 "uuid": "90856cbc-b196-4bc0-aee9-8949ea66174c", 00:25:21.083 "strip_size_kb": 64, 00:25:21.083 "state": "configuring", 00:25:21.083 "raid_level": "raid5f", 00:25:21.083 
"superblock": true, 00:25:21.083 "num_base_bdevs": 3, 00:25:21.083 "num_base_bdevs_discovered": 1, 00:25:21.083 "num_base_bdevs_operational": 2, 00:25:21.083 "base_bdevs_list": [ 00:25:21.083 { 00:25:21.083 "name": null, 00:25:21.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.083 "is_configured": false, 00:25:21.083 "data_offset": 2048, 00:25:21.083 "data_size": 63488 00:25:21.083 }, 00:25:21.083 { 00:25:21.083 "name": "pt2", 00:25:21.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:21.083 "is_configured": true, 00:25:21.083 "data_offset": 2048, 00:25:21.083 "data_size": 63488 00:25:21.083 }, 00:25:21.083 { 00:25:21.083 "name": null, 00:25:21.083 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:21.083 "is_configured": false, 00:25:21.083 "data_offset": 2048, 00:25:21.083 "data_size": 63488 00:25:21.083 } 00:25:21.083 ] 00:25:21.083 }' 00:25:21.083 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:21.083 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.356 [2024-12-05 12:57:03.869856] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:21.356 [2024-12-05 12:57:03.869910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:21.356 [2024-12-05 12:57:03.869929] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:25:21.356 [2024-12-05 12:57:03.869937] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:21.356 [2024-12-05 12:57:03.870355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:21.356 [2024-12-05 12:57:03.870369] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:21.356 [2024-12-05 12:57:03.870433] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:21.356 [2024-12-05 12:57:03.870452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:21.356 [2024-12-05 12:57:03.870581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:25:21.356 [2024-12-05 12:57:03.870591] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:21.356 [2024-12-05 12:57:03.870823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:25:21.356 [2024-12-05 12:57:03.874421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:25:21.356 [2024-12-05 12:57:03.874443] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:25:21.356 [2024-12-05 12:57:03.874667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:21.356 pt3 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:21.356 "name": "raid_bdev1", 00:25:21.356 "uuid": "90856cbc-b196-4bc0-aee9-8949ea66174c", 00:25:21.356 "strip_size_kb": 64, 00:25:21.356 "state": "online", 00:25:21.356 "raid_level": 
"raid5f", 00:25:21.356 "superblock": true, 00:25:21.356 "num_base_bdevs": 3, 00:25:21.356 "num_base_bdevs_discovered": 2, 00:25:21.356 "num_base_bdevs_operational": 2, 00:25:21.356 "base_bdevs_list": [ 00:25:21.356 { 00:25:21.356 "name": null, 00:25:21.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.356 "is_configured": false, 00:25:21.356 "data_offset": 2048, 00:25:21.356 "data_size": 63488 00:25:21.356 }, 00:25:21.356 { 00:25:21.356 "name": "pt2", 00:25:21.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:21.356 "is_configured": true, 00:25:21.356 "data_offset": 2048, 00:25:21.356 "data_size": 63488 00:25:21.356 }, 00:25:21.356 { 00:25:21.356 "name": "pt3", 00:25:21.356 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:21.356 "is_configured": true, 00:25:21.356 "data_offset": 2048, 00:25:21.356 "data_size": 63488 00:25:21.356 } 00:25:21.356 ] 00:25:21.356 }' 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:21.356 12:57:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.613 12:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:25:21.613 12:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.614 12:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.614 12:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:21.871 12:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 12:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:25:21.871 12:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:21.871 12:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:21.871 12:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 12:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:25:21.871 [2024-12-05 12:57:04.230886] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:21.871 12:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 12:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 90856cbc-b196-4bc0-aee9-8949ea66174c '!=' 90856cbc-b196-4bc0-aee9-8949ea66174c ']' 00:25:21.871 12:57:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 78632 00:25:21.871 12:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 78632 ']' 00:25:21.871 12:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 78632 00:25:21.871 12:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:25:21.871 12:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:21.871 12:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78632 00:25:21.871 killing process with pid 78632 00:25:21.871 12:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:21.871 12:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:21.871 12:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78632' 00:25:21.871 12:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 78632 00:25:21.871 [2024-12-05 12:57:04.283530] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:21.871 12:57:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 
78632 00:25:21.871 [2024-12-05 12:57:04.283606] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:21.871 [2024-12-05 12:57:04.283664] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:21.871 [2024-12-05 12:57:04.283676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:25:22.128 [2024-12-05 12:57:04.458505] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:22.693 12:57:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:25:22.693 00:25:22.693 real 0m5.516s 00:25:22.693 user 0m8.692s 00:25:22.693 sys 0m0.932s 00:25:22.693 12:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:22.693 12:57:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.693 ************************************ 00:25:22.693 END TEST raid5f_superblock_test 00:25:22.693 ************************************ 00:25:22.693 12:57:05 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:25:22.693 12:57:05 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:25:22.693 12:57:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:25:22.693 12:57:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:22.693 12:57:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:22.693 ************************************ 00:25:22.693 START TEST raid5f_rebuild_test 00:25:22.693 ************************************ 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # 
local num_base_bdevs=3 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:22.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=79048 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 79048 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 79048 ']' 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L 
bdev_raid 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:22.693 12:57:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.693 [2024-12-05 12:57:05.141711] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:25:22.693 [2024-12-05 12:57:05.142007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:25:22.693 Zero copy mechanism will not be used. 00:25:22.693 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79048 ] 00:25:22.950 [2024-12-05 12:57:05.299606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.950 [2024-12-05 12:57:05.401463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.207 [2024-12-05 12:57:05.537895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:23.207 [2024-12-05 12:57:05.538075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:23.464 12:57:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:23.464 12:57:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:25:23.464 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:23.464 12:57:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:23.464 12:57:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.464 12:57:05 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.464 BaseBdev1_malloc 00:25:23.464 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.464 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:23.464 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.464 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.464 [2024-12-05 12:57:06.035291] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:23.464 [2024-12-05 12:57:06.035527] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:23.464 [2024-12-05 12:57:06.035667] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:23.464 [2024-12-05 12:57:06.035762] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:23.464 [2024-12-05 12:57:06.038947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:23.464 [2024-12-05 12:57:06.039121] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:23.464 BaseBdev1 00:25:23.464 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.464 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:23.464 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:23.464 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.464 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.721 BaseBdev2_malloc 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.721 [2024-12-05 12:57:06.075548] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:23.721 [2024-12-05 12:57:06.075604] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:23.721 [2024-12-05 12:57:06.075627] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:23.721 [2024-12-05 12:57:06.075639] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:23.721 [2024-12-05 12:57:06.077793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:23.721 [2024-12-05 12:57:06.077828] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:23.721 BaseBdev2 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.721 BaseBdev3_malloc 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.721 [2024-12-05 12:57:06.124995] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:23.721 [2024-12-05 12:57:06.125050] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:23.721 [2024-12-05 12:57:06.125072] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:23.721 [2024-12-05 12:57:06.125083] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:23.721 [2024-12-05 12:57:06.127164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:23.721 [2024-12-05 12:57:06.127214] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:23.721 BaseBdev3 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.721 spare_malloc 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.721 spare_delay 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.721 12:57:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.721 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.722 [2024-12-05 12:57:06.169080] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:23.722 [2024-12-05 12:57:06.169130] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:23.722 [2024-12-05 12:57:06.169146] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:23.722 [2024-12-05 12:57:06.169157] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:23.722 [2024-12-05 12:57:06.171257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:23.722 [2024-12-05 12:57:06.171296] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:23.722 spare 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.722 [2024-12-05 12:57:06.177142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:23.722 [2024-12-05 12:57:06.178952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:23.722 [2024-12-05 12:57:06.179018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:23.722 [2024-12-05 12:57:06.179093] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:23.722 [2024-12-05 12:57:06.179104] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:25:23.722 [2024-12-05 12:57:06.179354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:23.722 [2024-12-05 12:57:06.183140] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:23.722 [2024-12-05 12:57:06.183160] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:23.722 [2024-12-05 12:57:06.183333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:23.722 "name": "raid_bdev1", 00:25:23.722 "uuid": "c2f5cecc-8d86-4379-9b8b-8832e3ec2fe1", 00:25:23.722 "strip_size_kb": 64, 00:25:23.722 "state": "online", 00:25:23.722 "raid_level": "raid5f", 00:25:23.722 "superblock": false, 00:25:23.722 "num_base_bdevs": 3, 00:25:23.722 "num_base_bdevs_discovered": 3, 00:25:23.722 "num_base_bdevs_operational": 3, 00:25:23.722 "base_bdevs_list": [ 00:25:23.722 { 00:25:23.722 "name": "BaseBdev1", 00:25:23.722 "uuid": "613a52bd-b97f-5043-9785-6e281831edf7", 00:25:23.722 "is_configured": true, 00:25:23.722 "data_offset": 0, 00:25:23.722 "data_size": 65536 00:25:23.722 }, 00:25:23.722 { 00:25:23.722 "name": "BaseBdev2", 00:25:23.722 "uuid": "45dcda20-6507-58a7-bfa4-4a49aff71e55", 00:25:23.722 "is_configured": true, 00:25:23.722 "data_offset": 0, 00:25:23.722 "data_size": 65536 00:25:23.722 }, 00:25:23.722 { 00:25:23.722 "name": "BaseBdev3", 00:25:23.722 "uuid": "8a9237c1-8146-58f5-a28b-64621a0b2820", 00:25:23.722 "is_configured": true, 00:25:23.722 "data_offset": 0, 00:25:23.722 "data_size": 65536 00:25:23.722 } 00:25:23.722 ] 00:25:23.722 }' 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:23.722 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:25:23.978 
12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.978 [2024-12-05 12:57:06.495627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:23.978 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:24.233 [2024-12-05 12:57:06.735517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:25:24.233 /dev/nbd0 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:24.233 1+0 records in 00:25:24.233 1+0 records out 00:25:24.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257829 s, 15.9 MB/s 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:25:24.233 12:57:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:25:24.795 512+0 records in 00:25:24.795 512+0 records out 00:25:24.795 67108864 bytes (67 MB, 64 MiB) copied, 0.327946 s, 205 MB/s 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:24.795 [2024-12-05 12:57:07.301769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:24.795 [2024-12-05 12:57:07.333872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:24.795 
12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:24.795 "name": "raid_bdev1", 00:25:24.795 "uuid": "c2f5cecc-8d86-4379-9b8b-8832e3ec2fe1", 00:25:24.795 "strip_size_kb": 64, 00:25:24.795 "state": "online", 00:25:24.795 "raid_level": "raid5f", 00:25:24.795 "superblock": false, 00:25:24.795 "num_base_bdevs": 3, 00:25:24.795 "num_base_bdevs_discovered": 2, 00:25:24.795 "num_base_bdevs_operational": 2, 00:25:24.795 "base_bdevs_list": [ 00:25:24.795 { 
00:25:24.795 "name": null, 00:25:24.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.795 "is_configured": false, 00:25:24.795 "data_offset": 0, 00:25:24.795 "data_size": 65536 00:25:24.795 }, 00:25:24.795 { 00:25:24.795 "name": "BaseBdev2", 00:25:24.795 "uuid": "45dcda20-6507-58a7-bfa4-4a49aff71e55", 00:25:24.795 "is_configured": true, 00:25:24.795 "data_offset": 0, 00:25:24.795 "data_size": 65536 00:25:24.795 }, 00:25:24.795 { 00:25:24.795 "name": "BaseBdev3", 00:25:24.795 "uuid": "8a9237c1-8146-58f5-a28b-64621a0b2820", 00:25:24.795 "is_configured": true, 00:25:24.795 "data_offset": 0, 00:25:24.795 "data_size": 65536 00:25:24.795 } 00:25:24.795 ] 00:25:24.795 }' 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:24.795 12:57:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.360 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:25.360 12:57:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.360 12:57:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.360 [2024-12-05 12:57:07.645949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:25.360 [2024-12-05 12:57:07.656857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:25:25.360 12:57:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.360 12:57:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:25:25.360 [2024-12-05 12:57:07.662557] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:26.291 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:26.291 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 00:25:26.291 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:26.291 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:26.291 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:26.291 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.291 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.291 12:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.291 12:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.291 12:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.291 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:26.291 "name": "raid_bdev1", 00:25:26.291 "uuid": "c2f5cecc-8d86-4379-9b8b-8832e3ec2fe1", 00:25:26.291 "strip_size_kb": 64, 00:25:26.291 "state": "online", 00:25:26.291 "raid_level": "raid5f", 00:25:26.291 "superblock": false, 00:25:26.291 "num_base_bdevs": 3, 00:25:26.291 "num_base_bdevs_discovered": 3, 00:25:26.291 "num_base_bdevs_operational": 3, 00:25:26.291 "process": { 00:25:26.291 "type": "rebuild", 00:25:26.291 "target": "spare", 00:25:26.291 "progress": { 00:25:26.291 "blocks": 18432, 00:25:26.291 "percent": 14 00:25:26.291 } 00:25:26.291 }, 00:25:26.291 "base_bdevs_list": [ 00:25:26.291 { 00:25:26.291 "name": "spare", 00:25:26.291 "uuid": "3faba2e1-4c2f-58e3-a203-1af3ee9e9d0a", 00:25:26.291 "is_configured": true, 00:25:26.291 "data_offset": 0, 00:25:26.291 "data_size": 65536 00:25:26.291 }, 00:25:26.291 { 00:25:26.291 "name": "BaseBdev2", 00:25:26.291 "uuid": "45dcda20-6507-58a7-bfa4-4a49aff71e55", 00:25:26.291 "is_configured": true, 00:25:26.291 "data_offset": 0, 00:25:26.291 
"data_size": 65536 00:25:26.291 }, 00:25:26.291 { 00:25:26.291 "name": "BaseBdev3", 00:25:26.291 "uuid": "8a9237c1-8146-58f5-a28b-64621a0b2820", 00:25:26.291 "is_configured": true, 00:25:26.291 "data_offset": 0, 00:25:26.291 "data_size": 65536 00:25:26.291 } 00:25:26.291 ] 00:25:26.291 }' 00:25:26.291 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:26.291 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:26.291 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:26.291 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:26.291 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:26.291 12:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.291 12:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.291 [2024-12-05 12:57:08.759780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:26.291 [2024-12-05 12:57:08.773011] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:26.291 [2024-12-05 12:57:08.773070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:26.291 [2024-12-05 12:57:08.773088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:26.291 [2024-12-05 12:57:08.773097] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:26.291 12:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.292 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:26.292 12:57:08 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:26.292 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:26.292 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:26.292 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:26.292 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:26.292 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:26.292 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:26.292 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:26.292 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:26.292 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.292 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.292 12:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.292 12:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.292 12:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.292 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:26.292 "name": "raid_bdev1", 00:25:26.292 "uuid": "c2f5cecc-8d86-4379-9b8b-8832e3ec2fe1", 00:25:26.292 "strip_size_kb": 64, 00:25:26.292 "state": "online", 00:25:26.292 "raid_level": "raid5f", 00:25:26.292 "superblock": false, 00:25:26.292 "num_base_bdevs": 3, 00:25:26.292 "num_base_bdevs_discovered": 2, 00:25:26.292 "num_base_bdevs_operational": 2, 00:25:26.292 "base_bdevs_list": [ 00:25:26.292 { 00:25:26.292 "name": null, 00:25:26.292 
"uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.292 "is_configured": false, 00:25:26.292 "data_offset": 0, 00:25:26.292 "data_size": 65536 00:25:26.292 }, 00:25:26.292 { 00:25:26.292 "name": "BaseBdev2", 00:25:26.292 "uuid": "45dcda20-6507-58a7-bfa4-4a49aff71e55", 00:25:26.292 "is_configured": true, 00:25:26.292 "data_offset": 0, 00:25:26.292 "data_size": 65536 00:25:26.292 }, 00:25:26.292 { 00:25:26.292 "name": "BaseBdev3", 00:25:26.292 "uuid": "8a9237c1-8146-58f5-a28b-64621a0b2820", 00:25:26.292 "is_configured": true, 00:25:26.292 "data_offset": 0, 00:25:26.292 "data_size": 65536 00:25:26.292 } 00:25:26.292 ] 00:25:26.292 }' 00:25:26.292 12:57:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:26.292 12:57:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.550 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:26.550 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:26.550 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:26.550 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:26.550 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:26.550 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.550 12:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.550 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.550 12:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.550 12:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.808 12:57:09 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:26.808 "name": "raid_bdev1", 00:25:26.808 "uuid": "c2f5cecc-8d86-4379-9b8b-8832e3ec2fe1", 00:25:26.808 "strip_size_kb": 64, 00:25:26.808 "state": "online", 00:25:26.808 "raid_level": "raid5f", 00:25:26.808 "superblock": false, 00:25:26.808 "num_base_bdevs": 3, 00:25:26.808 "num_base_bdevs_discovered": 2, 00:25:26.808 "num_base_bdevs_operational": 2, 00:25:26.808 "base_bdevs_list": [ 00:25:26.808 { 00:25:26.808 "name": null, 00:25:26.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.808 "is_configured": false, 00:25:26.808 "data_offset": 0, 00:25:26.808 "data_size": 65536 00:25:26.808 }, 00:25:26.808 { 00:25:26.808 "name": "BaseBdev2", 00:25:26.808 "uuid": "45dcda20-6507-58a7-bfa4-4a49aff71e55", 00:25:26.808 "is_configured": true, 00:25:26.808 "data_offset": 0, 00:25:26.808 "data_size": 65536 00:25:26.808 }, 00:25:26.808 { 00:25:26.808 "name": "BaseBdev3", 00:25:26.808 "uuid": "8a9237c1-8146-58f5-a28b-64621a0b2820", 00:25:26.808 "is_configured": true, 00:25:26.808 "data_offset": 0, 00:25:26.808 "data_size": 65536 00:25:26.808 } 00:25:26.808 ] 00:25:26.808 }' 00:25:26.808 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:26.808 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:26.808 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:26.808 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:26.808 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:26.808 12:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.808 12:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.808 [2024-12-05 12:57:09.207415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:25:26.808 [2024-12-05 12:57:09.217655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:25:26.808 12:57:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.808 12:57:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:25:26.808 [2024-12-05 12:57:09.222944] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:27.742 "name": "raid_bdev1", 00:25:27.742 "uuid": "c2f5cecc-8d86-4379-9b8b-8832e3ec2fe1", 00:25:27.742 "strip_size_kb": 64, 00:25:27.742 "state": "online", 00:25:27.742 "raid_level": "raid5f", 00:25:27.742 "superblock": false, 00:25:27.742 "num_base_bdevs": 3, 00:25:27.742 
"num_base_bdevs_discovered": 3, 00:25:27.742 "num_base_bdevs_operational": 3, 00:25:27.742 "process": { 00:25:27.742 "type": "rebuild", 00:25:27.742 "target": "spare", 00:25:27.742 "progress": { 00:25:27.742 "blocks": 18432, 00:25:27.742 "percent": 14 00:25:27.742 } 00:25:27.742 }, 00:25:27.742 "base_bdevs_list": [ 00:25:27.742 { 00:25:27.742 "name": "spare", 00:25:27.742 "uuid": "3faba2e1-4c2f-58e3-a203-1af3ee9e9d0a", 00:25:27.742 "is_configured": true, 00:25:27.742 "data_offset": 0, 00:25:27.742 "data_size": 65536 00:25:27.742 }, 00:25:27.742 { 00:25:27.742 "name": "BaseBdev2", 00:25:27.742 "uuid": "45dcda20-6507-58a7-bfa4-4a49aff71e55", 00:25:27.742 "is_configured": true, 00:25:27.742 "data_offset": 0, 00:25:27.742 "data_size": 65536 00:25:27.742 }, 00:25:27.742 { 00:25:27.742 "name": "BaseBdev3", 00:25:27.742 "uuid": "8a9237c1-8146-58f5-a28b-64621a0b2820", 00:25:27.742 "is_configured": true, 00:25:27.742 "data_offset": 0, 00:25:27.742 "data_size": 65536 00:25:27.742 } 00:25:27.742 ] 00:25:27.742 }' 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=423 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout 
)) 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:27.742 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:27.743 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:27.743 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:27.743 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.743 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.743 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.743 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.001 12:57:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.001 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:28.001 "name": "raid_bdev1", 00:25:28.001 "uuid": "c2f5cecc-8d86-4379-9b8b-8832e3ec2fe1", 00:25:28.001 "strip_size_kb": 64, 00:25:28.001 "state": "online", 00:25:28.001 "raid_level": "raid5f", 00:25:28.001 "superblock": false, 00:25:28.001 "num_base_bdevs": 3, 00:25:28.001 "num_base_bdevs_discovered": 3, 00:25:28.001 "num_base_bdevs_operational": 3, 00:25:28.001 "process": { 00:25:28.001 "type": "rebuild", 00:25:28.001 "target": "spare", 00:25:28.001 "progress": { 00:25:28.001 "blocks": 20480, 00:25:28.001 "percent": 15 00:25:28.001 } 00:25:28.001 }, 00:25:28.001 "base_bdevs_list": [ 00:25:28.001 { 00:25:28.001 "name": "spare", 00:25:28.001 "uuid": "3faba2e1-4c2f-58e3-a203-1af3ee9e9d0a", 00:25:28.001 "is_configured": true, 00:25:28.001 "data_offset": 0, 00:25:28.001 
"data_size": 65536 00:25:28.001 }, 00:25:28.001 { 00:25:28.001 "name": "BaseBdev2", 00:25:28.001 "uuid": "45dcda20-6507-58a7-bfa4-4a49aff71e55", 00:25:28.001 "is_configured": true, 00:25:28.001 "data_offset": 0, 00:25:28.001 "data_size": 65536 00:25:28.001 }, 00:25:28.001 { 00:25:28.001 "name": "BaseBdev3", 00:25:28.001 "uuid": "8a9237c1-8146-58f5-a28b-64621a0b2820", 00:25:28.001 "is_configured": true, 00:25:28.001 "data_offset": 0, 00:25:28.001 "data_size": 65536 00:25:28.001 } 00:25:28.001 ] 00:25:28.001 }' 00:25:28.001 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:28.001 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:28.001 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:28.001 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:28.001 12:57:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:28.935 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:28.935 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:28.935 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:28.935 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:28.935 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:28.935 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:28.935 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:28.935 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.935 12:57:11 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.935 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.935 12:57:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.935 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:28.935 "name": "raid_bdev1", 00:25:28.935 "uuid": "c2f5cecc-8d86-4379-9b8b-8832e3ec2fe1", 00:25:28.935 "strip_size_kb": 64, 00:25:28.935 "state": "online", 00:25:28.935 "raid_level": "raid5f", 00:25:28.935 "superblock": false, 00:25:28.935 "num_base_bdevs": 3, 00:25:28.935 "num_base_bdevs_discovered": 3, 00:25:28.935 "num_base_bdevs_operational": 3, 00:25:28.935 "process": { 00:25:28.935 "type": "rebuild", 00:25:28.935 "target": "spare", 00:25:28.935 "progress": { 00:25:28.935 "blocks": 43008, 00:25:28.935 "percent": 32 00:25:28.935 } 00:25:28.935 }, 00:25:28.935 "base_bdevs_list": [ 00:25:28.935 { 00:25:28.935 "name": "spare", 00:25:28.935 "uuid": "3faba2e1-4c2f-58e3-a203-1af3ee9e9d0a", 00:25:28.935 "is_configured": true, 00:25:28.935 "data_offset": 0, 00:25:28.935 "data_size": 65536 00:25:28.935 }, 00:25:28.935 { 00:25:28.935 "name": "BaseBdev2", 00:25:28.935 "uuid": "45dcda20-6507-58a7-bfa4-4a49aff71e55", 00:25:28.935 "is_configured": true, 00:25:28.935 "data_offset": 0, 00:25:28.935 "data_size": 65536 00:25:28.935 }, 00:25:28.935 { 00:25:28.935 "name": "BaseBdev3", 00:25:28.935 "uuid": "8a9237c1-8146-58f5-a28b-64621a0b2820", 00:25:28.935 "is_configured": true, 00:25:28.935 "data_offset": 0, 00:25:28.935 "data_size": 65536 00:25:28.935 } 00:25:28.935 ] 00:25:28.935 }' 00:25:28.935 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:28.935 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:28.935 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:25:28.935 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:28.935 12:57:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:30.307 12:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:30.307 12:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:30.307 12:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:30.307 12:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:30.307 12:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:30.307 12:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:30.307 12:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:30.307 12:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:30.307 12:57:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.307 12:57:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.307 12:57:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.307 12:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:30.307 "name": "raid_bdev1", 00:25:30.307 "uuid": "c2f5cecc-8d86-4379-9b8b-8832e3ec2fe1", 00:25:30.307 "strip_size_kb": 64, 00:25:30.307 "state": "online", 00:25:30.307 "raid_level": "raid5f", 00:25:30.307 "superblock": false, 00:25:30.307 "num_base_bdevs": 3, 00:25:30.307 "num_base_bdevs_discovered": 3, 00:25:30.307 "num_base_bdevs_operational": 3, 00:25:30.307 "process": { 00:25:30.307 "type": "rebuild", 00:25:30.307 "target": "spare", 00:25:30.307 
"progress": { 00:25:30.307 "blocks": 65536, 00:25:30.307 "percent": 50 00:25:30.307 } 00:25:30.307 }, 00:25:30.307 "base_bdevs_list": [ 00:25:30.307 { 00:25:30.307 "name": "spare", 00:25:30.307 "uuid": "3faba2e1-4c2f-58e3-a203-1af3ee9e9d0a", 00:25:30.307 "is_configured": true, 00:25:30.307 "data_offset": 0, 00:25:30.307 "data_size": 65536 00:25:30.307 }, 00:25:30.307 { 00:25:30.307 "name": "BaseBdev2", 00:25:30.307 "uuid": "45dcda20-6507-58a7-bfa4-4a49aff71e55", 00:25:30.307 "is_configured": true, 00:25:30.307 "data_offset": 0, 00:25:30.307 "data_size": 65536 00:25:30.307 }, 00:25:30.307 { 00:25:30.307 "name": "BaseBdev3", 00:25:30.307 "uuid": "8a9237c1-8146-58f5-a28b-64621a0b2820", 00:25:30.307 "is_configured": true, 00:25:30.307 "data_offset": 0, 00:25:30.307 "data_size": 65536 00:25:30.307 } 00:25:30.307 ] 00:25:30.307 }' 00:25:30.307 12:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:30.307 12:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:30.307 12:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:30.307 12:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:30.307 12:57:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:31.243 12:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:31.243 12:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:31.243 12:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:31.243 12:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:31.243 12:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:31.243 12:57:13 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:31.243 12:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.243 12:57:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.243 12:57:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.243 12:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.243 12:57:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.243 12:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:31.243 "name": "raid_bdev1", 00:25:31.243 "uuid": "c2f5cecc-8d86-4379-9b8b-8832e3ec2fe1", 00:25:31.243 "strip_size_kb": 64, 00:25:31.243 "state": "online", 00:25:31.243 "raid_level": "raid5f", 00:25:31.243 "superblock": false, 00:25:31.243 "num_base_bdevs": 3, 00:25:31.243 "num_base_bdevs_discovered": 3, 00:25:31.243 "num_base_bdevs_operational": 3, 00:25:31.243 "process": { 00:25:31.243 "type": "rebuild", 00:25:31.243 "target": "spare", 00:25:31.243 "progress": { 00:25:31.243 "blocks": 88064, 00:25:31.243 "percent": 67 00:25:31.243 } 00:25:31.243 }, 00:25:31.243 "base_bdevs_list": [ 00:25:31.243 { 00:25:31.243 "name": "spare", 00:25:31.243 "uuid": "3faba2e1-4c2f-58e3-a203-1af3ee9e9d0a", 00:25:31.243 "is_configured": true, 00:25:31.243 "data_offset": 0, 00:25:31.243 "data_size": 65536 00:25:31.243 }, 00:25:31.243 { 00:25:31.243 "name": "BaseBdev2", 00:25:31.243 "uuid": "45dcda20-6507-58a7-bfa4-4a49aff71e55", 00:25:31.243 "is_configured": true, 00:25:31.243 "data_offset": 0, 00:25:31.243 "data_size": 65536 00:25:31.243 }, 00:25:31.243 { 00:25:31.243 "name": "BaseBdev3", 00:25:31.243 "uuid": "8a9237c1-8146-58f5-a28b-64621a0b2820", 00:25:31.243 "is_configured": true, 00:25:31.243 "data_offset": 0, 00:25:31.243 "data_size": 65536 00:25:31.243 } 00:25:31.243 ] 00:25:31.243 }' 
00:25:31.243 12:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:31.244 12:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:31.244 12:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:31.244 12:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:31.244 12:57:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:32.177 12:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:32.177 12:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:32.177 12:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:32.177 12:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:32.177 12:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:32.177 12:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:32.177 12:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:32.177 12:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:32.177 12:57:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.177 12:57:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.177 12:57:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.177 12:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:32.177 "name": "raid_bdev1", 00:25:32.177 "uuid": "c2f5cecc-8d86-4379-9b8b-8832e3ec2fe1", 00:25:32.177 "strip_size_kb": 64, 00:25:32.177 
"state": "online", 00:25:32.177 "raid_level": "raid5f", 00:25:32.177 "superblock": false, 00:25:32.177 "num_base_bdevs": 3, 00:25:32.177 "num_base_bdevs_discovered": 3, 00:25:32.177 "num_base_bdevs_operational": 3, 00:25:32.177 "process": { 00:25:32.177 "type": "rebuild", 00:25:32.177 "target": "spare", 00:25:32.177 "progress": { 00:25:32.177 "blocks": 110592, 00:25:32.177 "percent": 84 00:25:32.177 } 00:25:32.177 }, 00:25:32.178 "base_bdevs_list": [ 00:25:32.178 { 00:25:32.178 "name": "spare", 00:25:32.178 "uuid": "3faba2e1-4c2f-58e3-a203-1af3ee9e9d0a", 00:25:32.178 "is_configured": true, 00:25:32.178 "data_offset": 0, 00:25:32.178 "data_size": 65536 00:25:32.178 }, 00:25:32.178 { 00:25:32.178 "name": "BaseBdev2", 00:25:32.178 "uuid": "45dcda20-6507-58a7-bfa4-4a49aff71e55", 00:25:32.178 "is_configured": true, 00:25:32.178 "data_offset": 0, 00:25:32.178 "data_size": 65536 00:25:32.178 }, 00:25:32.178 { 00:25:32.178 "name": "BaseBdev3", 00:25:32.178 "uuid": "8a9237c1-8146-58f5-a28b-64621a0b2820", 00:25:32.178 "is_configured": true, 00:25:32.178 "data_offset": 0, 00:25:32.178 "data_size": 65536 00:25:32.178 } 00:25:32.178 ] 00:25:32.178 }' 00:25:32.178 12:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:32.437 12:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:32.437 12:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:32.437 12:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:32.437 12:57:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:33.384 [2024-12-05 12:57:15.676981] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:33.384 [2024-12-05 12:57:15.677048] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:33.384 [2024-12-05 
12:57:15.677084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:33.384 "name": "raid_bdev1", 00:25:33.384 "uuid": "c2f5cecc-8d86-4379-9b8b-8832e3ec2fe1", 00:25:33.384 "strip_size_kb": 64, 00:25:33.384 "state": "online", 00:25:33.384 "raid_level": "raid5f", 00:25:33.384 "superblock": false, 00:25:33.384 "num_base_bdevs": 3, 00:25:33.384 "num_base_bdevs_discovered": 3, 00:25:33.384 "num_base_bdevs_operational": 3, 00:25:33.384 "base_bdevs_list": [ 00:25:33.384 { 00:25:33.384 "name": "spare", 00:25:33.384 "uuid": "3faba2e1-4c2f-58e3-a203-1af3ee9e9d0a", 00:25:33.384 "is_configured": true, 00:25:33.384 "data_offset": 0, 00:25:33.384 "data_size": 65536 
00:25:33.384 }, 00:25:33.384 { 00:25:33.384 "name": "BaseBdev2", 00:25:33.384 "uuid": "45dcda20-6507-58a7-bfa4-4a49aff71e55", 00:25:33.384 "is_configured": true, 00:25:33.384 "data_offset": 0, 00:25:33.384 "data_size": 65536 00:25:33.384 }, 00:25:33.384 { 00:25:33.384 "name": "BaseBdev3", 00:25:33.384 "uuid": "8a9237c1-8146-58f5-a28b-64621a0b2820", 00:25:33.384 "is_configured": true, 00:25:33.384 "data_offset": 0, 00:25:33.384 "data_size": 65536 00:25:33.384 } 00:25:33.384 ] 00:25:33.384 }' 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:33.384 "name": "raid_bdev1", 00:25:33.384 "uuid": "c2f5cecc-8d86-4379-9b8b-8832e3ec2fe1", 00:25:33.384 "strip_size_kb": 64, 00:25:33.384 "state": "online", 00:25:33.384 "raid_level": "raid5f", 00:25:33.384 "superblock": false, 00:25:33.384 "num_base_bdevs": 3, 00:25:33.384 "num_base_bdevs_discovered": 3, 00:25:33.384 "num_base_bdevs_operational": 3, 00:25:33.384 "base_bdevs_list": [ 00:25:33.384 { 00:25:33.384 "name": "spare", 00:25:33.384 "uuid": "3faba2e1-4c2f-58e3-a203-1af3ee9e9d0a", 00:25:33.384 "is_configured": true, 00:25:33.384 "data_offset": 0, 00:25:33.384 "data_size": 65536 00:25:33.384 }, 00:25:33.384 { 00:25:33.384 "name": "BaseBdev2", 00:25:33.384 "uuid": "45dcda20-6507-58a7-bfa4-4a49aff71e55", 00:25:33.384 "is_configured": true, 00:25:33.384 "data_offset": 0, 00:25:33.384 "data_size": 65536 00:25:33.384 }, 00:25:33.384 { 00:25:33.384 "name": "BaseBdev3", 00:25:33.384 "uuid": "8a9237c1-8146-58f5-a28b-64621a0b2820", 00:25:33.384 "is_configured": true, 00:25:33.384 "data_offset": 0, 00:25:33.384 "data_size": 65536 00:25:33.384 } 00:25:33.384 ] 00:25:33.384 }' 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:33.384 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:33.641 12:57:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:33.641 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:33.641 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:33.641 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:25:33.641 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:33.641 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:33.641 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:33.641 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:33.641 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:33.641 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:33.641 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:33.641 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:33.641 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.641 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:33.641 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.641 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.641 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.641 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:33.641 "name": "raid_bdev1", 00:25:33.641 "uuid": "c2f5cecc-8d86-4379-9b8b-8832e3ec2fe1", 00:25:33.641 "strip_size_kb": 64, 00:25:33.641 "state": "online", 00:25:33.641 "raid_level": "raid5f", 00:25:33.641 "superblock": false, 00:25:33.641 "num_base_bdevs": 3, 00:25:33.641 "num_base_bdevs_discovered": 3, 00:25:33.641 "num_base_bdevs_operational": 3, 00:25:33.641 "base_bdevs_list": [ 00:25:33.641 { 00:25:33.641 "name": "spare", 00:25:33.641 "uuid": 
"3faba2e1-4c2f-58e3-a203-1af3ee9e9d0a", 00:25:33.641 "is_configured": true, 00:25:33.641 "data_offset": 0, 00:25:33.641 "data_size": 65536 00:25:33.641 }, 00:25:33.641 { 00:25:33.641 "name": "BaseBdev2", 00:25:33.641 "uuid": "45dcda20-6507-58a7-bfa4-4a49aff71e55", 00:25:33.641 "is_configured": true, 00:25:33.641 "data_offset": 0, 00:25:33.641 "data_size": 65536 00:25:33.641 }, 00:25:33.641 { 00:25:33.641 "name": "BaseBdev3", 00:25:33.641 "uuid": "8a9237c1-8146-58f5-a28b-64621a0b2820", 00:25:33.641 "is_configured": true, 00:25:33.641 "data_offset": 0, 00:25:33.641 "data_size": 65536 00:25:33.641 } 00:25:33.641 ] 00:25:33.641 }' 00:25:33.641 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:33.641 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.898 [2024-12-05 12:57:16.327380] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:33.898 [2024-12-05 12:57:16.327405] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:33.898 [2024-12-05 12:57:16.327470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:33.898 [2024-12-05 12:57:16.327556] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:33.898 [2024-12-05 12:57:16.327570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:33.898 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:34.156 /dev/nbd0 00:25:34.156 12:57:16 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:34.156 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:34.156 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:34.156 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:25:34.156 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:34.156 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:34.156 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:34.156 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:25:34.156 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:34.156 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:34.156 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:34.156 1+0 records in 00:25:34.156 1+0 records out 00:25:34.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266818 s, 15.4 MB/s 00:25:34.156 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:34.156 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:25:34.156 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:34.156 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:34.156 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:25:34.156 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:25:34.156 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:34.156 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:25:34.415 /dev/nbd1 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:34.415 1+0 records in 00:25:34.415 1+0 records out 00:25:34.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357718 s, 11.5 MB/s 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:34.415 12:57:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:34.673 12:57:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:34.673 12:57:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:34.673 12:57:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:34.673 12:57:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:34.673 12:57:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:34.673 12:57:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:25:34.673 12:57:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:25:34.673 12:57:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:25:34.673 12:57:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:34.673 12:57:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:25:34.930 12:57:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:34.930 12:57:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:34.930 12:57:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:34.930 12:57:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:34.930 12:57:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:34.930 12:57:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:34.930 12:57:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:25:34.930 12:57:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:25:34.930 12:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:25:34.930 12:57:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 79048 00:25:34.930 12:57:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 79048 ']' 00:25:34.930 12:57:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 79048 00:25:34.930 12:57:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:25:34.930 12:57:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:34.930 12:57:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 79048 00:25:34.930 killing process with pid 79048 00:25:34.930 Received shutdown signal, test time was about 60.000000 seconds 00:25:34.930 00:25:34.930 Latency(us) 00:25:34.930 [2024-12-05T12:57:17.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.930 [2024-12-05T12:57:17.517Z] =================================================================================================================== 00:25:34.930 [2024-12-05T12:57:17.517Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:34.931 12:57:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:34.931 12:57:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:34.931 12:57:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79048' 00:25:34.931 12:57:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 79048 00:25:34.931 [2024-12-05 12:57:17.422630] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:34.931 12:57:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 79048 00:25:35.188 [2024-12-05 12:57:17.614951] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:25:35.753 00:25:35.753 real 0m13.125s 00:25:35.753 user 0m15.837s 00:25:35.753 sys 0m1.488s 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:35.753 ************************************ 00:25:35.753 END TEST raid5f_rebuild_test 00:25:35.753 ************************************ 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.753 12:57:18 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:25:35.753 12:57:18 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:25:35.753 12:57:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:35.753 12:57:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:35.753 ************************************ 00:25:35.753 START TEST raid5f_rebuild_test_sb 00:25:35.753 ************************************ 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:25:35.753 12:57:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=79467 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 79467 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 79467 ']' 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:35.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:35.753 12:57:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:35.753 [2024-12-05 12:57:18.300143] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:25:35.753 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:35.754 Zero copy mechanism will not be used. 
00:25:35.754 [2024-12-05 12:57:18.300362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79467 ] 00:25:36.011 [2024-12-05 12:57:18.451050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.011 [2024-12-05 12:57:18.537291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.268 [2024-12-05 12:57:18.649473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:36.268 [2024-12-05 12:57:18.649505] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:36.833 BaseBdev1_malloc 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:36.833 [2024-12-05 12:57:19.177889] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:36.833 [2024-12-05 12:57:19.178091] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:36.833 [2024-12-05 12:57:19.178115] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:36.833 [2024-12-05 12:57:19.178125] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:36.833 [2024-12-05 12:57:19.179936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:36.833 [2024-12-05 12:57:19.179969] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:36.833 BaseBdev1 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:36.833 BaseBdev2_malloc 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:36.833 [2024-12-05 12:57:19.209972] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:36.833 [2024-12-05 12:57:19.210023] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:25:36.833 [2024-12-05 12:57:19.210040] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:36.833 [2024-12-05 12:57:19.210049] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:36.833 [2024-12-05 12:57:19.211807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:36.833 [2024-12-05 12:57:19.211838] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:36.833 BaseBdev2 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:36.833 BaseBdev3_malloc 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:36.833 [2024-12-05 12:57:19.255847] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:36.833 [2024-12-05 12:57:19.255894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:36.833 [2024-12-05 12:57:19.255912] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:36.833 [2024-12-05 
12:57:19.255921] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:36.833 [2024-12-05 12:57:19.257690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:36.833 [2024-12-05 12:57:19.257723] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:36.833 BaseBdev3 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:36.833 spare_malloc 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:36.833 spare_delay 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:36.833 [2024-12-05 12:57:19.295904] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:36.833 [2024-12-05 12:57:19.295948] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:36.833 [2024-12-05 12:57:19.295961] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:36.833 [2024-12-05 12:57:19.295970] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:36.833 [2024-12-05 12:57:19.297750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:36.833 [2024-12-05 12:57:19.297782] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:36.833 spare 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.833 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:36.833 [2024-12-05 12:57:19.303968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:36.834 [2024-12-05 12:57:19.305538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:36.834 [2024-12-05 12:57:19.305594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:36.834 [2024-12-05 12:57:19.305731] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:36.834 [2024-12-05 12:57:19.305740] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:36.834 [2024-12-05 12:57:19.305945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:36.834 [2024-12-05 12:57:19.309103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:36.834 [2024-12-05 12:57:19.309215] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:36.834 [2024-12-05 12:57:19.309383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:36.834 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.834 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:36.834 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:36.834 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:36.834 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:36.834 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:36.834 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:36.834 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:36.834 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:36.834 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:36.834 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:36.834 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:36.834 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.834 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.834 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:36.834 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.834 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:36.834 "name": "raid_bdev1", 00:25:36.834 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:36.834 "strip_size_kb": 64, 00:25:36.834 "state": "online", 00:25:36.834 "raid_level": "raid5f", 00:25:36.834 "superblock": true, 00:25:36.834 "num_base_bdevs": 3, 00:25:36.834 "num_base_bdevs_discovered": 3, 00:25:36.834 "num_base_bdevs_operational": 3, 00:25:36.834 "base_bdevs_list": [ 00:25:36.834 { 00:25:36.834 "name": "BaseBdev1", 00:25:36.834 "uuid": "723543c7-4da9-5f3c-b297-c0e01f2345e0", 00:25:36.834 "is_configured": true, 00:25:36.834 "data_offset": 2048, 00:25:36.834 "data_size": 63488 00:25:36.834 }, 00:25:36.834 { 00:25:36.834 "name": "BaseBdev2", 00:25:36.834 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:36.834 "is_configured": true, 00:25:36.834 "data_offset": 2048, 00:25:36.834 "data_size": 63488 00:25:36.834 }, 00:25:36.834 { 00:25:36.834 "name": "BaseBdev3", 00:25:36.834 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:36.834 "is_configured": true, 00:25:36.834 "data_offset": 2048, 00:25:36.834 "data_size": 63488 00:25:36.834 } 00:25:36.834 ] 00:25:36.834 }' 00:25:36.834 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:36.834 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:37.101 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:25:37.101 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:37.101 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.101 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:37.101 [2024-12-05 12:57:19.629700] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:25:37.101 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.101 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:25:37.101 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.101 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:37.101 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.101 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:37.101 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:37.359 [2024-12-05 12:57:19.877537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:25:37.359 /dev/nbd0 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:37.359 1+0 records in 00:25:37.359 1+0 records out 00:25:37.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366155 s, 11.2 MB/s 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:25:37.359 12:57:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:25:37.925 496+0 records in 00:25:37.925 496+0 records out 00:25:37.925 65011712 bytes (65 MB, 62 MiB) copied, 0.331711 s, 196 MB/s 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:37.925 [2024-12-05 12:57:20.478412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:37.925 [2024-12-05 12:57:20.485816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:37.925 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.183 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:38.183 "name": "raid_bdev1", 00:25:38.183 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:38.183 "strip_size_kb": 64, 00:25:38.183 "state": "online", 00:25:38.183 "raid_level": "raid5f", 00:25:38.183 "superblock": true, 00:25:38.183 "num_base_bdevs": 3, 00:25:38.183 "num_base_bdevs_discovered": 2, 00:25:38.183 "num_base_bdevs_operational": 2, 00:25:38.183 "base_bdevs_list": [ 00:25:38.183 { 00:25:38.183 "name": null, 00:25:38.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.183 "is_configured": 
false, 00:25:38.183 "data_offset": 0, 00:25:38.183 "data_size": 63488 00:25:38.183 }, 00:25:38.183 { 00:25:38.183 "name": "BaseBdev2", 00:25:38.183 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:38.183 "is_configured": true, 00:25:38.183 "data_offset": 2048, 00:25:38.183 "data_size": 63488 00:25:38.183 }, 00:25:38.183 { 00:25:38.183 "name": "BaseBdev3", 00:25:38.183 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:38.183 "is_configured": true, 00:25:38.183 "data_offset": 2048, 00:25:38.183 "data_size": 63488 00:25:38.183 } 00:25:38.183 ] 00:25:38.183 }' 00:25:38.183 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:38.183 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:38.440 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:38.440 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.440 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:38.440 [2024-12-05 12:57:20.805890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:38.440 [2024-12-05 12:57:20.815007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:25:38.440 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.440 12:57:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:25:38.440 [2024-12-05 12:57:20.819545] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:39.372 12:57:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:39.372 "name": "raid_bdev1", 00:25:39.372 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:39.372 "strip_size_kb": 64, 00:25:39.372 "state": "online", 00:25:39.372 "raid_level": "raid5f", 00:25:39.372 "superblock": true, 00:25:39.372 "num_base_bdevs": 3, 00:25:39.372 "num_base_bdevs_discovered": 3, 00:25:39.372 "num_base_bdevs_operational": 3, 00:25:39.372 "process": { 00:25:39.372 "type": "rebuild", 00:25:39.372 "target": "spare", 00:25:39.372 "progress": { 00:25:39.372 "blocks": 20480, 00:25:39.372 "percent": 16 00:25:39.372 } 00:25:39.372 }, 00:25:39.372 "base_bdevs_list": [ 00:25:39.372 { 00:25:39.372 "name": "spare", 00:25:39.372 "uuid": "8ca7b0e0-378e-58b8-9743-6a85be6d2832", 00:25:39.372 "is_configured": true, 00:25:39.372 "data_offset": 2048, 00:25:39.372 "data_size": 63488 00:25:39.372 }, 00:25:39.372 { 00:25:39.372 "name": "BaseBdev2", 00:25:39.372 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:39.372 "is_configured": true, 00:25:39.372 "data_offset": 2048, 00:25:39.372 "data_size": 63488 
00:25:39.372 }, 00:25:39.372 { 00:25:39.372 "name": "BaseBdev3", 00:25:39.372 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:39.372 "is_configured": true, 00:25:39.372 "data_offset": 2048, 00:25:39.372 "data_size": 63488 00:25:39.372 } 00:25:39.372 ] 00:25:39.372 }' 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:39.372 [2024-12-05 12:57:21.928554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:39.372 [2024-12-05 12:57:21.928803] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:39.372 [2024-12-05 12:57:21.928843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:39.372 [2024-12-05 12:57:21.928857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:39.372 [2024-12-05 12:57:21.928863] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.372 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:39.630 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.630 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:39.630 "name": "raid_bdev1", 00:25:39.630 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:39.630 "strip_size_kb": 64, 00:25:39.630 "state": "online", 00:25:39.630 "raid_level": "raid5f", 00:25:39.630 "superblock": true, 00:25:39.630 "num_base_bdevs": 3, 00:25:39.630 "num_base_bdevs_discovered": 2, 00:25:39.630 "num_base_bdevs_operational": 2, 00:25:39.630 "base_bdevs_list": [ 00:25:39.630 
{ 00:25:39.630 "name": null, 00:25:39.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.630 "is_configured": false, 00:25:39.630 "data_offset": 0, 00:25:39.630 "data_size": 63488 00:25:39.630 }, 00:25:39.630 { 00:25:39.630 "name": "BaseBdev2", 00:25:39.630 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:39.630 "is_configured": true, 00:25:39.630 "data_offset": 2048, 00:25:39.630 "data_size": 63488 00:25:39.630 }, 00:25:39.630 { 00:25:39.630 "name": "BaseBdev3", 00:25:39.630 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:39.630 "is_configured": true, 00:25:39.630 "data_offset": 2048, 00:25:39.630 "data_size": 63488 00:25:39.630 } 00:25:39.630 ] 00:25:39.630 }' 00:25:39.630 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:39.630 12:57:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:39.888 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:39.888 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:39.888 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:39.888 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:39.888 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:39.889 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:39.889 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.889 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.889 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:39.889 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:25:39.889 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:39.889 "name": "raid_bdev1", 00:25:39.889 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:39.889 "strip_size_kb": 64, 00:25:39.889 "state": "online", 00:25:39.889 "raid_level": "raid5f", 00:25:39.889 "superblock": true, 00:25:39.889 "num_base_bdevs": 3, 00:25:39.889 "num_base_bdevs_discovered": 2, 00:25:39.889 "num_base_bdevs_operational": 2, 00:25:39.889 "base_bdevs_list": [ 00:25:39.889 { 00:25:39.889 "name": null, 00:25:39.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.889 "is_configured": false, 00:25:39.889 "data_offset": 0, 00:25:39.889 "data_size": 63488 00:25:39.889 }, 00:25:39.889 { 00:25:39.889 "name": "BaseBdev2", 00:25:39.889 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:39.889 "is_configured": true, 00:25:39.889 "data_offset": 2048, 00:25:39.889 "data_size": 63488 00:25:39.889 }, 00:25:39.889 { 00:25:39.889 "name": "BaseBdev3", 00:25:39.889 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:39.889 "is_configured": true, 00:25:39.889 "data_offset": 2048, 00:25:39.889 "data_size": 63488 00:25:39.889 } 00:25:39.889 ] 00:25:39.889 }' 00:25:39.889 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:39.889 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:39.889 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:39.889 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:39.889 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:39.889 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.889 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:25:39.889 [2024-12-05 12:57:22.367428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:39.889 [2024-12-05 12:57:22.375947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:25:39.889 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.889 12:57:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:25:39.889 [2024-12-05 12:57:22.380329] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:40.909 "name": "raid_bdev1", 00:25:40.909 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:40.909 "strip_size_kb": 64, 00:25:40.909 "state": "online", 
00:25:40.909 "raid_level": "raid5f", 00:25:40.909 "superblock": true, 00:25:40.909 "num_base_bdevs": 3, 00:25:40.909 "num_base_bdevs_discovered": 3, 00:25:40.909 "num_base_bdevs_operational": 3, 00:25:40.909 "process": { 00:25:40.909 "type": "rebuild", 00:25:40.909 "target": "spare", 00:25:40.909 "progress": { 00:25:40.909 "blocks": 20480, 00:25:40.909 "percent": 16 00:25:40.909 } 00:25:40.909 }, 00:25:40.909 "base_bdevs_list": [ 00:25:40.909 { 00:25:40.909 "name": "spare", 00:25:40.909 "uuid": "8ca7b0e0-378e-58b8-9743-6a85be6d2832", 00:25:40.909 "is_configured": true, 00:25:40.909 "data_offset": 2048, 00:25:40.909 "data_size": 63488 00:25:40.909 }, 00:25:40.909 { 00:25:40.909 "name": "BaseBdev2", 00:25:40.909 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:40.909 "is_configured": true, 00:25:40.909 "data_offset": 2048, 00:25:40.909 "data_size": 63488 00:25:40.909 }, 00:25:40.909 { 00:25:40.909 "name": "BaseBdev3", 00:25:40.909 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:40.909 "is_configured": true, 00:25:40.909 "data_offset": 2048, 00:25:40.909 "data_size": 63488 00:25:40.909 } 00:25:40.909 ] 00:25:40.909 }' 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:25:40.909 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=3 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=436 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.909 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.167 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:41.167 "name": "raid_bdev1", 00:25:41.167 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:41.167 "strip_size_kb": 64, 00:25:41.167 "state": "online", 00:25:41.167 "raid_level": "raid5f", 00:25:41.167 "superblock": true, 00:25:41.167 "num_base_bdevs": 3, 00:25:41.167 "num_base_bdevs_discovered": 3, 00:25:41.167 "num_base_bdevs_operational": 3, 00:25:41.167 "process": { 00:25:41.167 "type": 
"rebuild", 00:25:41.167 "target": "spare", 00:25:41.167 "progress": { 00:25:41.167 "blocks": 20480, 00:25:41.167 "percent": 16 00:25:41.167 } 00:25:41.167 }, 00:25:41.167 "base_bdevs_list": [ 00:25:41.167 { 00:25:41.167 "name": "spare", 00:25:41.167 "uuid": "8ca7b0e0-378e-58b8-9743-6a85be6d2832", 00:25:41.167 "is_configured": true, 00:25:41.167 "data_offset": 2048, 00:25:41.167 "data_size": 63488 00:25:41.167 }, 00:25:41.167 { 00:25:41.167 "name": "BaseBdev2", 00:25:41.167 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:41.167 "is_configured": true, 00:25:41.167 "data_offset": 2048, 00:25:41.167 "data_size": 63488 00:25:41.167 }, 00:25:41.167 { 00:25:41.167 "name": "BaseBdev3", 00:25:41.167 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:41.167 "is_configured": true, 00:25:41.167 "data_offset": 2048, 00:25:41.167 "data_size": 63488 00:25:41.167 } 00:25:41.167 ] 00:25:41.167 }' 00:25:41.167 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:41.167 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:41.167 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:41.167 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:41.167 12:57:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:42.098 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:42.098 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:42.098 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:42.098 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:42.098 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:25:42.098 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:42.098 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.098 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:42.098 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.098 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:42.098 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.098 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:42.098 "name": "raid_bdev1", 00:25:42.098 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:42.098 "strip_size_kb": 64, 00:25:42.098 "state": "online", 00:25:42.098 "raid_level": "raid5f", 00:25:42.098 "superblock": true, 00:25:42.098 "num_base_bdevs": 3, 00:25:42.098 "num_base_bdevs_discovered": 3, 00:25:42.098 "num_base_bdevs_operational": 3, 00:25:42.098 "process": { 00:25:42.098 "type": "rebuild", 00:25:42.098 "target": "spare", 00:25:42.098 "progress": { 00:25:42.098 "blocks": 43008, 00:25:42.098 "percent": 33 00:25:42.098 } 00:25:42.098 }, 00:25:42.098 "base_bdevs_list": [ 00:25:42.098 { 00:25:42.098 "name": "spare", 00:25:42.098 "uuid": "8ca7b0e0-378e-58b8-9743-6a85be6d2832", 00:25:42.098 "is_configured": true, 00:25:42.098 "data_offset": 2048, 00:25:42.098 "data_size": 63488 00:25:42.098 }, 00:25:42.098 { 00:25:42.098 "name": "BaseBdev2", 00:25:42.098 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:42.099 "is_configured": true, 00:25:42.099 "data_offset": 2048, 00:25:42.099 "data_size": 63488 00:25:42.099 }, 00:25:42.099 { 00:25:42.099 "name": "BaseBdev3", 00:25:42.099 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:42.099 
"is_configured": true, 00:25:42.099 "data_offset": 2048, 00:25:42.099 "data_size": 63488 00:25:42.099 } 00:25:42.099 ] 00:25:42.099 }' 00:25:42.099 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:42.099 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:42.099 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:42.099 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:42.099 12:57:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:43.470 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:43.470 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:43.470 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:43.470 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:43.470 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:43.470 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:43.470 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:43.470 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:43.470 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.470 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:43.470 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.470 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:43.470 "name": "raid_bdev1", 00:25:43.470 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:43.470 "strip_size_kb": 64, 00:25:43.470 "state": "online", 00:25:43.470 "raid_level": "raid5f", 00:25:43.470 "superblock": true, 00:25:43.470 "num_base_bdevs": 3, 00:25:43.470 "num_base_bdevs_discovered": 3, 00:25:43.470 "num_base_bdevs_operational": 3, 00:25:43.470 "process": { 00:25:43.470 "type": "rebuild", 00:25:43.470 "target": "spare", 00:25:43.470 "progress": { 00:25:43.470 "blocks": 65536, 00:25:43.470 "percent": 51 00:25:43.470 } 00:25:43.470 }, 00:25:43.470 "base_bdevs_list": [ 00:25:43.470 { 00:25:43.470 "name": "spare", 00:25:43.470 "uuid": "8ca7b0e0-378e-58b8-9743-6a85be6d2832", 00:25:43.470 "is_configured": true, 00:25:43.470 "data_offset": 2048, 00:25:43.470 "data_size": 63488 00:25:43.470 }, 00:25:43.470 { 00:25:43.470 "name": "BaseBdev2", 00:25:43.470 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:43.470 "is_configured": true, 00:25:43.470 "data_offset": 2048, 00:25:43.470 "data_size": 63488 00:25:43.470 }, 00:25:43.470 { 00:25:43.470 "name": "BaseBdev3", 00:25:43.470 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:43.470 "is_configured": true, 00:25:43.470 "data_offset": 2048, 00:25:43.470 "data_size": 63488 00:25:43.471 } 00:25:43.471 ] 00:25:43.471 }' 00:25:43.471 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:43.471 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:43.471 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:43.471 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:43.471 12:57:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:44.415 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:25:44.415 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:44.415 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:44.415 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:44.415 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:44.415 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:44.415 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.415 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.415 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:44.415 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.415 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.415 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:44.415 "name": "raid_bdev1", 00:25:44.415 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:44.415 "strip_size_kb": 64, 00:25:44.415 "state": "online", 00:25:44.415 "raid_level": "raid5f", 00:25:44.415 "superblock": true, 00:25:44.415 "num_base_bdevs": 3, 00:25:44.415 "num_base_bdevs_discovered": 3, 00:25:44.415 "num_base_bdevs_operational": 3, 00:25:44.415 "process": { 00:25:44.415 "type": "rebuild", 00:25:44.415 "target": "spare", 00:25:44.415 "progress": { 00:25:44.415 "blocks": 88064, 00:25:44.415 "percent": 69 00:25:44.415 } 00:25:44.415 }, 00:25:44.415 "base_bdevs_list": [ 00:25:44.415 { 00:25:44.415 "name": "spare", 00:25:44.415 "uuid": "8ca7b0e0-378e-58b8-9743-6a85be6d2832", 00:25:44.415 "is_configured": true, 
00:25:44.415 "data_offset": 2048, 00:25:44.415 "data_size": 63488 00:25:44.415 }, 00:25:44.415 { 00:25:44.415 "name": "BaseBdev2", 00:25:44.415 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:44.415 "is_configured": true, 00:25:44.415 "data_offset": 2048, 00:25:44.415 "data_size": 63488 00:25:44.415 }, 00:25:44.415 { 00:25:44.415 "name": "BaseBdev3", 00:25:44.415 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:44.415 "is_configured": true, 00:25:44.415 "data_offset": 2048, 00:25:44.415 "data_size": 63488 00:25:44.415 } 00:25:44.415 ] 00:25:44.415 }' 00:25:44.415 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:44.415 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:44.415 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:44.415 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:44.415 12:57:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:45.385 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:45.385 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:45.385 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:45.385 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:45.385 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:45.385 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:45.385 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:45.385 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.385 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.385 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.385 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.385 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:45.385 "name": "raid_bdev1", 00:25:45.385 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:45.385 "strip_size_kb": 64, 00:25:45.385 "state": "online", 00:25:45.385 "raid_level": "raid5f", 00:25:45.385 "superblock": true, 00:25:45.385 "num_base_bdevs": 3, 00:25:45.385 "num_base_bdevs_discovered": 3, 00:25:45.385 "num_base_bdevs_operational": 3, 00:25:45.385 "process": { 00:25:45.385 "type": "rebuild", 00:25:45.385 "target": "spare", 00:25:45.385 "progress": { 00:25:45.385 "blocks": 110592, 00:25:45.385 "percent": 87 00:25:45.385 } 00:25:45.385 }, 00:25:45.385 "base_bdevs_list": [ 00:25:45.385 { 00:25:45.385 "name": "spare", 00:25:45.385 "uuid": "8ca7b0e0-378e-58b8-9743-6a85be6d2832", 00:25:45.385 "is_configured": true, 00:25:45.385 "data_offset": 2048, 00:25:45.385 "data_size": 63488 00:25:45.385 }, 00:25:45.385 { 00:25:45.385 "name": "BaseBdev2", 00:25:45.385 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:45.385 "is_configured": true, 00:25:45.385 "data_offset": 2048, 00:25:45.385 "data_size": 63488 00:25:45.385 }, 00:25:45.385 { 00:25:45.385 "name": "BaseBdev3", 00:25:45.385 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:45.385 "is_configured": true, 00:25:45.385 "data_offset": 2048, 00:25:45.385 "data_size": 63488 00:25:45.385 } 00:25:45.385 ] 00:25:45.385 }' 00:25:45.385 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:45.385 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:25:45.385 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:45.645 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:45.645 12:57:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:46.212 [2024-12-05 12:57:28.630233] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:46.212 [2024-12-05 12:57:28.630450] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:46.212 [2024-12-05 12:57:28.630568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:46.471 12:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:46.471 12:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:46.471 12:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:46.471 12:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:46.471 12:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:46.471 12:57:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:46.471 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:46.471 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.471 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.471 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:46.471 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.471 12:57:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:46.471 "name": "raid_bdev1", 00:25:46.471 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:46.471 "strip_size_kb": 64, 00:25:46.471 "state": "online", 00:25:46.471 "raid_level": "raid5f", 00:25:46.471 "superblock": true, 00:25:46.471 "num_base_bdevs": 3, 00:25:46.471 "num_base_bdevs_discovered": 3, 00:25:46.471 "num_base_bdevs_operational": 3, 00:25:46.471 "base_bdevs_list": [ 00:25:46.471 { 00:25:46.471 "name": "spare", 00:25:46.471 "uuid": "8ca7b0e0-378e-58b8-9743-6a85be6d2832", 00:25:46.471 "is_configured": true, 00:25:46.471 "data_offset": 2048, 00:25:46.471 "data_size": 63488 00:25:46.471 }, 00:25:46.471 { 00:25:46.471 "name": "BaseBdev2", 00:25:46.471 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:46.471 "is_configured": true, 00:25:46.471 "data_offset": 2048, 00:25:46.471 "data_size": 63488 00:25:46.471 }, 00:25:46.471 { 00:25:46.471 "name": "BaseBdev3", 00:25:46.471 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:46.471 "is_configured": true, 00:25:46.471 "data_offset": 2048, 00:25:46.471 "data_size": 63488 00:25:46.471 } 00:25:46.471 ] 00:25:46.471 }' 00:25:46.471 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:46.729 
12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:46.729 "name": "raid_bdev1", 00:25:46.729 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:46.729 "strip_size_kb": 64, 00:25:46.729 "state": "online", 00:25:46.729 "raid_level": "raid5f", 00:25:46.729 "superblock": true, 00:25:46.729 "num_base_bdevs": 3, 00:25:46.729 "num_base_bdevs_discovered": 3, 00:25:46.729 "num_base_bdevs_operational": 3, 00:25:46.729 "base_bdevs_list": [ 00:25:46.729 { 00:25:46.729 "name": "spare", 00:25:46.729 "uuid": "8ca7b0e0-378e-58b8-9743-6a85be6d2832", 00:25:46.729 "is_configured": true, 00:25:46.729 "data_offset": 2048, 00:25:46.729 "data_size": 63488 00:25:46.729 }, 00:25:46.729 { 00:25:46.729 "name": "BaseBdev2", 00:25:46.729 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:46.729 "is_configured": true, 00:25:46.729 "data_offset": 2048, 00:25:46.729 "data_size": 63488 00:25:46.729 }, 00:25:46.729 { 00:25:46.729 "name": "BaseBdev3", 00:25:46.729 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:46.729 "is_configured": true, 00:25:46.729 "data_offset": 2048, 
00:25:46.729 "data_size": 63488 00:25:46.729 } 00:25:46.729 ] 00:25:46.729 }' 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.729 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:46.730 "name": "raid_bdev1", 00:25:46.730 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:46.730 "strip_size_kb": 64, 00:25:46.730 "state": "online", 00:25:46.730 "raid_level": "raid5f", 00:25:46.730 "superblock": true, 00:25:46.730 "num_base_bdevs": 3, 00:25:46.730 "num_base_bdevs_discovered": 3, 00:25:46.730 "num_base_bdevs_operational": 3, 00:25:46.730 "base_bdevs_list": [ 00:25:46.730 { 00:25:46.730 "name": "spare", 00:25:46.730 "uuid": "8ca7b0e0-378e-58b8-9743-6a85be6d2832", 00:25:46.730 "is_configured": true, 00:25:46.730 "data_offset": 2048, 00:25:46.730 "data_size": 63488 00:25:46.730 }, 00:25:46.730 { 00:25:46.730 "name": "BaseBdev2", 00:25:46.730 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:46.730 "is_configured": true, 00:25:46.730 "data_offset": 2048, 00:25:46.730 "data_size": 63488 00:25:46.730 }, 00:25:46.730 { 00:25:46.730 "name": "BaseBdev3", 00:25:46.730 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:46.730 "is_configured": true, 00:25:46.730 "data_offset": 2048, 00:25:46.730 "data_size": 63488 00:25:46.730 } 00:25:46.730 ] 00:25:46.730 }' 00:25:46.730 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:46.730 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:46.988 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:46.988 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.988 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:46.988 [2024-12-05 12:57:29.528709] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:46.988 [2024-12-05 12:57:29.528818] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:46.988 [2024-12-05 12:57:29.528889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:46.988 [2024-12-05 12:57:29.528957] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:46.988 [2024-12-05 12:57:29.528968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:46.988 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.988 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:46.988 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:25:46.988 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.988 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:46.988 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.988 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:25:46.988 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:25:46.988 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:25:46.988 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:46.988 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:46.988 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:46.988 12:57:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:46.988 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:46.988 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:46.988 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:25:46.988 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:46.988 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:46.988 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:47.247 /dev/nbd0 00:25:47.247 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:47.247 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:47.247 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:47.247 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:25:47.247 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:47.247 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:47.247 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:47.247 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:25:47.247 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:47.247 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:47.247 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:47.247 1+0 records in 00:25:47.247 1+0 records out 00:25:47.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213607 s, 19.2 MB/s 00:25:47.247 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:47.247 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:25:47.247 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:47.247 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:47.247 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:25:47.247 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:47.247 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:47.247 12:57:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:25:47.505 /dev/nbd1 00:25:47.505 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:47.505 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:47.505 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:25:47.505 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:25:47.505 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:47.505 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:47.505 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:25:47.505 
12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:25:47.505 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:47.505 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:47.505 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:47.505 1+0 records in 00:25:47.505 1+0 records out 00:25:47.505 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031418 s, 13.0 MB/s 00:25:47.505 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:47.505 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:25:47.505 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:47.505 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:47.505 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:25:47.505 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:47.505 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:47.505 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:47.763 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:25:47.763 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:47.763 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:47.763 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:25:47.763 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:25:47.763 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:47.763 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.021 [2024-12-05 12:57:30.588642] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:48.021 [2024-12-05 12:57:30.588691] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:48.021 [2024-12-05 12:57:30.588707] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:48.021 [2024-12-05 12:57:30.588716] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:48.021 [2024-12-05 12:57:30.590609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:48.021 [2024-12-05 12:57:30.590642] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:48.021 [2024-12-05 12:57:30.590716] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:48.021 [2024-12-05 12:57:30.590756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:48.021 [2024-12-05 12:57:30.590858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:48.021 [2024-12-05 12:57:30.590942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:48.021 spare 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.021 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.279 [2024-12-05 12:57:30.691019] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:48.279 [2024-12-05 12:57:30.691185] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:48.279 [2024-12-05 12:57:30.691471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:25:48.279 [2024-12-05 12:57:30.694369] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:48.279 [2024-12-05 12:57:30.694458] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:25:48.279 [2024-12-05 12:57:30.694697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:48.279 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.279 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:48.279 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:48.279 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:48.279 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:48.279 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:48.279 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:48.279 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:48.279 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:48.279 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:48.279 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:48.279 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:48.279 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.279 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.279 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.279 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.279 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:48.279 "name": "raid_bdev1", 00:25:48.279 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:48.279 "strip_size_kb": 64, 00:25:48.279 "state": "online", 00:25:48.279 "raid_level": "raid5f", 00:25:48.280 "superblock": true, 00:25:48.280 "num_base_bdevs": 3, 00:25:48.280 "num_base_bdevs_discovered": 3, 00:25:48.280 "num_base_bdevs_operational": 3, 00:25:48.280 "base_bdevs_list": [ 00:25:48.280 { 
00:25:48.280 "name": "spare", 00:25:48.280 "uuid": "8ca7b0e0-378e-58b8-9743-6a85be6d2832", 00:25:48.280 "is_configured": true, 00:25:48.280 "data_offset": 2048, 00:25:48.280 "data_size": 63488 00:25:48.280 }, 00:25:48.280 { 00:25:48.280 "name": "BaseBdev2", 00:25:48.280 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:48.280 "is_configured": true, 00:25:48.280 "data_offset": 2048, 00:25:48.280 "data_size": 63488 00:25:48.280 }, 00:25:48.280 { 00:25:48.280 "name": "BaseBdev3", 00:25:48.280 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:48.280 "is_configured": true, 00:25:48.280 "data_offset": 2048, 00:25:48.280 "data_size": 63488 00:25:48.280 } 00:25:48.280 ] 00:25:48.280 }' 00:25:48.280 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:48.280 12:57:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.537 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:48.537 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:48.537 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:48.537 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:48.537 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:48.537 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:48.537 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.537 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.537 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.537 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.537 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:48.537 "name": "raid_bdev1", 00:25:48.537 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:48.537 "strip_size_kb": 64, 00:25:48.537 "state": "online", 00:25:48.537 "raid_level": "raid5f", 00:25:48.537 "superblock": true, 00:25:48.537 "num_base_bdevs": 3, 00:25:48.537 "num_base_bdevs_discovered": 3, 00:25:48.537 "num_base_bdevs_operational": 3, 00:25:48.537 "base_bdevs_list": [ 00:25:48.537 { 00:25:48.537 "name": "spare", 00:25:48.537 "uuid": "8ca7b0e0-378e-58b8-9743-6a85be6d2832", 00:25:48.537 "is_configured": true, 00:25:48.537 "data_offset": 2048, 00:25:48.537 "data_size": 63488 00:25:48.537 }, 00:25:48.537 { 00:25:48.537 "name": "BaseBdev2", 00:25:48.537 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:48.537 "is_configured": true, 00:25:48.537 "data_offset": 2048, 00:25:48.537 "data_size": 63488 00:25:48.537 }, 00:25:48.537 { 00:25:48.537 "name": "BaseBdev3", 00:25:48.537 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:48.537 "is_configured": true, 00:25:48.537 "data_offset": 2048, 00:25:48.537 "data_size": 63488 00:25:48.537 } 00:25:48.537 ] 00:25:48.537 }' 00:25:48.537 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:48.537 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:48.537 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:48.537 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:48.537 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:48.537 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:48.537 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.537 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.794 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.794 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:25:48.794 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:48.794 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.794 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.794 [2024-12-05 12:57:31.150742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:48.794 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.794 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:48.794 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:48.794 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:48.794 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:48.794 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:48.794 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:48.795 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:48.795 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:48.795 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:48.795 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:25:48.795 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:48.795 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.795 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.795 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.795 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.795 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:48.795 "name": "raid_bdev1", 00:25:48.795 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:48.795 "strip_size_kb": 64, 00:25:48.795 "state": "online", 00:25:48.795 "raid_level": "raid5f", 00:25:48.795 "superblock": true, 00:25:48.795 "num_base_bdevs": 3, 00:25:48.795 "num_base_bdevs_discovered": 2, 00:25:48.795 "num_base_bdevs_operational": 2, 00:25:48.795 "base_bdevs_list": [ 00:25:48.795 { 00:25:48.795 "name": null, 00:25:48.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.795 "is_configured": false, 00:25:48.795 "data_offset": 0, 00:25:48.795 "data_size": 63488 00:25:48.795 }, 00:25:48.795 { 00:25:48.795 "name": "BaseBdev2", 00:25:48.795 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:48.795 "is_configured": true, 00:25:48.795 "data_offset": 2048, 00:25:48.795 "data_size": 63488 00:25:48.795 }, 00:25:48.795 { 00:25:48.795 "name": "BaseBdev3", 00:25:48.795 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:48.795 "is_configured": true, 00:25:48.795 "data_offset": 2048, 00:25:48.795 "data_size": 63488 00:25:48.795 } 00:25:48.795 ] 00:25:48.795 }' 00:25:48.795 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:48.795 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:25:49.051 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:49.051 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.051 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.051 [2024-12-05 12:57:31.474820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:49.051 [2024-12-05 12:57:31.475088] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:49.051 [2024-12-05 12:57:31.475108] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:49.051 [2024-12-05 12:57:31.475141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:49.051 [2024-12-05 12:57:31.483513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:25:49.051 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.051 12:57:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:25:49.051 [2024-12-05 12:57:31.487910] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:49.981 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:49.981 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:49.981 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:49.981 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:49.981 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:49.981 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.981 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.981 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.981 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.981 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.981 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:49.981 "name": "raid_bdev1", 00:25:49.981 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:49.981 "strip_size_kb": 64, 00:25:49.981 "state": "online", 00:25:49.981 "raid_level": "raid5f", 00:25:49.981 "superblock": true, 00:25:49.981 "num_base_bdevs": 3, 00:25:49.982 "num_base_bdevs_discovered": 3, 00:25:49.982 "num_base_bdevs_operational": 3, 00:25:49.982 "process": { 00:25:49.982 "type": "rebuild", 00:25:49.982 "target": "spare", 00:25:49.982 "progress": { 00:25:49.982 "blocks": 20480, 00:25:49.982 "percent": 16 00:25:49.982 } 00:25:49.982 }, 00:25:49.982 "base_bdevs_list": [ 00:25:49.982 { 00:25:49.982 "name": "spare", 00:25:49.982 "uuid": "8ca7b0e0-378e-58b8-9743-6a85be6d2832", 00:25:49.982 "is_configured": true, 00:25:49.982 "data_offset": 2048, 00:25:49.982 "data_size": 63488 00:25:49.982 }, 00:25:49.982 { 00:25:49.982 "name": "BaseBdev2", 00:25:49.982 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:49.982 "is_configured": true, 00:25:49.982 "data_offset": 2048, 00:25:49.982 "data_size": 63488 00:25:49.982 }, 00:25:49.982 { 00:25:49.982 "name": "BaseBdev3", 00:25:49.982 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:49.982 "is_configured": true, 00:25:49.982 "data_offset": 2048, 00:25:49.982 "data_size": 63488 00:25:49.982 } 00:25:49.982 ] 00:25:49.982 }' 00:25:49.982 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:25:49.982 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:49.982 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:50.238 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:50.238 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:25:50.238 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.238 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.238 [2024-12-05 12:57:32.593157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:50.238 [2024-12-05 12:57:32.597355] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:50.238 [2024-12-05 12:57:32.597522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:50.238 [2024-12-05 12:57:32.597540] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:50.238 [2024-12-05 12:57:32.597549] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:50.238 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.239 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:50.239 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:50.239 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:50.239 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:50.239 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:25:50.239 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:50.239 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:50.239 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:50.239 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:50.239 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:50.239 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:50.239 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.239 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.239 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.239 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.239 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:50.239 "name": "raid_bdev1", 00:25:50.239 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:50.239 "strip_size_kb": 64, 00:25:50.239 "state": "online", 00:25:50.239 "raid_level": "raid5f", 00:25:50.239 "superblock": true, 00:25:50.239 "num_base_bdevs": 3, 00:25:50.239 "num_base_bdevs_discovered": 2, 00:25:50.239 "num_base_bdevs_operational": 2, 00:25:50.239 "base_bdevs_list": [ 00:25:50.239 { 00:25:50.239 "name": null, 00:25:50.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.239 "is_configured": false, 00:25:50.239 "data_offset": 0, 00:25:50.239 "data_size": 63488 00:25:50.239 }, 00:25:50.239 { 00:25:50.239 "name": "BaseBdev2", 00:25:50.239 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:50.239 "is_configured": true, 00:25:50.239 
"data_offset": 2048, 00:25:50.239 "data_size": 63488 00:25:50.239 }, 00:25:50.239 { 00:25:50.239 "name": "BaseBdev3", 00:25:50.239 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:50.239 "is_configured": true, 00:25:50.239 "data_offset": 2048, 00:25:50.239 "data_size": 63488 00:25:50.239 } 00:25:50.239 ] 00:25:50.239 }' 00:25:50.239 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:50.239 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.497 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:50.497 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.497 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.497 [2024-12-05 12:57:32.923920] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:50.497 [2024-12-05 12:57:32.923976] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:50.497 [2024-12-05 12:57:32.923993] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:25:50.497 [2024-12-05 12:57:32.924018] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:50.497 [2024-12-05 12:57:32.924439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:50.497 [2024-12-05 12:57:32.924466] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:50.497 [2024-12-05 12:57:32.924555] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:50.497 [2024-12-05 12:57:32.924572] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:50.497 [2024-12-05 12:57:32.924581] bdev_raid.c:3758:raid_bdev_examine_sb: 
*NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:50.497 [2024-12-05 12:57:32.924602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:50.497 [2024-12-05 12:57:32.932952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:25:50.497 spare 00:25:50.497 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.497 12:57:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:25:50.497 [2024-12-05 12:57:32.937450] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:51.435 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:51.435 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:51.435 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:51.435 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:51.435 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:51.435 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.435 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.435 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.435 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.435 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.435 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:51.435 "name": "raid_bdev1", 00:25:51.435 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 
00:25:51.435 "strip_size_kb": 64, 00:25:51.435 "state": "online", 00:25:51.435 "raid_level": "raid5f", 00:25:51.435 "superblock": true, 00:25:51.435 "num_base_bdevs": 3, 00:25:51.435 "num_base_bdevs_discovered": 3, 00:25:51.435 "num_base_bdevs_operational": 3, 00:25:51.435 "process": { 00:25:51.435 "type": "rebuild", 00:25:51.435 "target": "spare", 00:25:51.435 "progress": { 00:25:51.435 "blocks": 18432, 00:25:51.435 "percent": 14 00:25:51.435 } 00:25:51.435 }, 00:25:51.435 "base_bdevs_list": [ 00:25:51.435 { 00:25:51.435 "name": "spare", 00:25:51.435 "uuid": "8ca7b0e0-378e-58b8-9743-6a85be6d2832", 00:25:51.435 "is_configured": true, 00:25:51.435 "data_offset": 2048, 00:25:51.435 "data_size": 63488 00:25:51.435 }, 00:25:51.435 { 00:25:51.435 "name": "BaseBdev2", 00:25:51.435 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:51.435 "is_configured": true, 00:25:51.435 "data_offset": 2048, 00:25:51.435 "data_size": 63488 00:25:51.435 }, 00:25:51.435 { 00:25:51.435 "name": "BaseBdev3", 00:25:51.435 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:51.435 "is_configured": true, 00:25:51.435 "data_offset": 2048, 00:25:51.435 "data_size": 63488 00:25:51.435 } 00:25:51.435 ] 00:25:51.435 }' 00:25:51.435 12:57:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:51.435 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:51.435 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:51.693 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:51.693 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:25:51.693 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.693 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:25:51.693 [2024-12-05 12:57:34.042661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:51.693 [2024-12-05 12:57:34.046648] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:51.693 [2024-12-05 12:57:34.046796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:51.693 [2024-12-05 12:57:34.046816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:51.693 [2024-12-05 12:57:34.046823] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:51.694 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.694 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:51.694 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:51.694 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:51.694 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:51.694 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:51.694 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:51.694 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:51.694 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:51.694 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:51.694 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:51.694 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.694 
12:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.694 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.694 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.694 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.694 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:51.694 "name": "raid_bdev1", 00:25:51.694 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:51.694 "strip_size_kb": 64, 00:25:51.694 "state": "online", 00:25:51.694 "raid_level": "raid5f", 00:25:51.694 "superblock": true, 00:25:51.694 "num_base_bdevs": 3, 00:25:51.694 "num_base_bdevs_discovered": 2, 00:25:51.694 "num_base_bdevs_operational": 2, 00:25:51.694 "base_bdevs_list": [ 00:25:51.694 { 00:25:51.694 "name": null, 00:25:51.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.694 "is_configured": false, 00:25:51.694 "data_offset": 0, 00:25:51.694 "data_size": 63488 00:25:51.694 }, 00:25:51.694 { 00:25:51.694 "name": "BaseBdev2", 00:25:51.694 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:51.694 "is_configured": true, 00:25:51.694 "data_offset": 2048, 00:25:51.694 "data_size": 63488 00:25:51.694 }, 00:25:51.694 { 00:25:51.694 "name": "BaseBdev3", 00:25:51.694 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:51.694 "is_configured": true, 00:25:51.694 "data_offset": 2048, 00:25:51.694 "data_size": 63488 00:25:51.694 } 00:25:51.694 ] 00:25:51.694 }' 00:25:51.694 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:51.694 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:51.953 12:57:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:51.953 "name": "raid_bdev1", 00:25:51.953 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:51.953 "strip_size_kb": 64, 00:25:51.953 "state": "online", 00:25:51.953 "raid_level": "raid5f", 00:25:51.953 "superblock": true, 00:25:51.953 "num_base_bdevs": 3, 00:25:51.953 "num_base_bdevs_discovered": 2, 00:25:51.953 "num_base_bdevs_operational": 2, 00:25:51.953 "base_bdevs_list": [ 00:25:51.953 { 00:25:51.953 "name": null, 00:25:51.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.953 "is_configured": false, 00:25:51.953 "data_offset": 0, 00:25:51.953 "data_size": 63488 00:25:51.953 }, 00:25:51.953 { 00:25:51.953 "name": "BaseBdev2", 00:25:51.953 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:51.953 "is_configured": true, 00:25:51.953 "data_offset": 2048, 00:25:51.953 "data_size": 63488 00:25:51.953 }, 00:25:51.953 { 00:25:51.953 "name": "BaseBdev3", 00:25:51.953 "uuid": 
"4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:51.953 "is_configured": true, 00:25:51.953 "data_offset": 2048, 00:25:51.953 "data_size": 63488 00:25:51.953 } 00:25:51.953 ] 00:25:51.953 }' 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.953 [2024-12-05 12:57:34.493027] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:51.953 [2024-12-05 12:57:34.493073] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:51.953 [2024-12-05 12:57:34.493093] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:25:51.953 [2024-12-05 12:57:34.493100] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:51.953 [2024-12-05 12:57:34.493461] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:51.953 [2024-12-05 12:57:34.493473] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:51.953 [2024-12-05 12:57:34.493563] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:51.953 [2024-12-05 12:57:34.493577] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:51.953 [2024-12-05 12:57:34.493585] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:51.953 [2024-12-05 12:57:34.493594] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:25:51.953 BaseBdev1 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.953 12:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:25:52.952 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:52.952 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:52.952 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:52.952 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:52.952 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:52.952 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:52.952 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:52.952 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:52.952 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:52.952 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:52.952 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:52.952 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.952 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.952 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.952 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.952 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:52.952 "name": "raid_bdev1", 00:25:52.952 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:52.952 "strip_size_kb": 64, 00:25:52.952 "state": "online", 00:25:52.952 "raid_level": "raid5f", 00:25:52.952 "superblock": true, 00:25:52.952 "num_base_bdevs": 3, 00:25:52.952 "num_base_bdevs_discovered": 2, 00:25:52.952 "num_base_bdevs_operational": 2, 00:25:52.952 "base_bdevs_list": [ 00:25:52.952 { 00:25:52.952 "name": null, 00:25:52.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.952 "is_configured": false, 00:25:52.952 "data_offset": 0, 00:25:52.952 "data_size": 63488 00:25:52.952 }, 00:25:52.952 { 00:25:52.952 "name": "BaseBdev2", 00:25:52.952 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:52.952 "is_configured": true, 00:25:52.952 "data_offset": 2048, 00:25:52.952 "data_size": 63488 00:25:52.952 }, 00:25:52.952 { 00:25:52.952 "name": "BaseBdev3", 00:25:52.952 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:52.952 "is_configured": true, 00:25:52.952 "data_offset": 2048, 00:25:52.952 "data_size": 63488 00:25:52.952 } 00:25:52.952 ] 00:25:52.952 }' 00:25:52.952 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:25:52.952 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:53.518 "name": "raid_bdev1", 00:25:53.518 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:53.518 "strip_size_kb": 64, 00:25:53.518 "state": "online", 00:25:53.518 "raid_level": "raid5f", 00:25:53.518 "superblock": true, 00:25:53.518 "num_base_bdevs": 3, 00:25:53.518 "num_base_bdevs_discovered": 2, 00:25:53.518 "num_base_bdevs_operational": 2, 00:25:53.518 "base_bdevs_list": [ 00:25:53.518 { 00:25:53.518 "name": null, 00:25:53.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.518 "is_configured": false, 00:25:53.518 "data_offset": 0, 00:25:53.518 "data_size": 63488 00:25:53.518 }, 00:25:53.518 { 00:25:53.518 "name": 
"BaseBdev2", 00:25:53.518 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:53.518 "is_configured": true, 00:25:53.518 "data_offset": 2048, 00:25:53.518 "data_size": 63488 00:25:53.518 }, 00:25:53.518 { 00:25:53.518 "name": "BaseBdev3", 00:25:53.518 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:53.518 "is_configured": true, 00:25:53.518 "data_offset": 2048, 00:25:53.518 "data_size": 63488 00:25:53.518 } 00:25:53.518 ] 00:25:53.518 }' 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.518 [2024-12-05 12:57:35.933344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:53.518 [2024-12-05 12:57:35.933579] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:53.518 [2024-12-05 12:57:35.933598] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:53.518 request: 00:25:53.518 { 00:25:53.518 "base_bdev": "BaseBdev1", 00:25:53.518 "raid_bdev": "raid_bdev1", 00:25:53.518 "method": "bdev_raid_add_base_bdev", 00:25:53.518 "req_id": 1 00:25:53.518 } 00:25:53.518 Got JSON-RPC error response 00:25:53.518 response: 00:25:53.518 { 00:25:53.518 "code": -22, 00:25:53.518 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:53.518 } 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:53.518 12:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:25:54.453 12:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:54.453 12:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:54.453 12:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:25:54.453 12:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:54.453 12:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:54.453 12:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:54.453 12:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:54.453 12:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:54.453 12:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:54.453 12:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:54.453 12:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:54.453 12:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.453 12:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.453 12:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.453 12:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.453 12:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:54.453 "name": "raid_bdev1", 00:25:54.453 "uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:54.453 "strip_size_kb": 64, 00:25:54.453 "state": "online", 00:25:54.453 "raid_level": "raid5f", 00:25:54.453 "superblock": true, 00:25:54.453 "num_base_bdevs": 3, 00:25:54.453 "num_base_bdevs_discovered": 2, 00:25:54.453 "num_base_bdevs_operational": 2, 00:25:54.453 "base_bdevs_list": [ 00:25:54.453 { 00:25:54.453 "name": null, 00:25:54.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.453 "is_configured": false, 00:25:54.453 "data_offset": 0, 00:25:54.453 
"data_size": 63488 00:25:54.453 }, 00:25:54.453 { 00:25:54.453 "name": "BaseBdev2", 00:25:54.453 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:54.453 "is_configured": true, 00:25:54.453 "data_offset": 2048, 00:25:54.453 "data_size": 63488 00:25:54.453 }, 00:25:54.453 { 00:25:54.453 "name": "BaseBdev3", 00:25:54.453 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:54.453 "is_configured": true, 00:25:54.453 "data_offset": 2048, 00:25:54.453 "data_size": 63488 00:25:54.453 } 00:25:54.453 ] 00:25:54.453 }' 00:25:54.453 12:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:54.453 12:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.710 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:54.710 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:54.710 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:54.710 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:54.710 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:54.710 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:54.710 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.710 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.710 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.710 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.968 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:54.968 "name": "raid_bdev1", 00:25:54.968 
"uuid": "3cd5786a-e0eb-4e12-87d1-4dd3649c92e8", 00:25:54.968 "strip_size_kb": 64, 00:25:54.968 "state": "online", 00:25:54.968 "raid_level": "raid5f", 00:25:54.968 "superblock": true, 00:25:54.968 "num_base_bdevs": 3, 00:25:54.968 "num_base_bdevs_discovered": 2, 00:25:54.968 "num_base_bdevs_operational": 2, 00:25:54.968 "base_bdevs_list": [ 00:25:54.968 { 00:25:54.968 "name": null, 00:25:54.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.968 "is_configured": false, 00:25:54.968 "data_offset": 0, 00:25:54.968 "data_size": 63488 00:25:54.968 }, 00:25:54.968 { 00:25:54.968 "name": "BaseBdev2", 00:25:54.968 "uuid": "e6d40a14-3257-5cc1-81e6-6c273a3d4659", 00:25:54.968 "is_configured": true, 00:25:54.968 "data_offset": 2048, 00:25:54.968 "data_size": 63488 00:25:54.968 }, 00:25:54.968 { 00:25:54.968 "name": "BaseBdev3", 00:25:54.968 "uuid": "4faffc79-2ece-5149-a73c-2a0cb14f0b8a", 00:25:54.968 "is_configured": true, 00:25:54.968 "data_offset": 2048, 00:25:54.968 "data_size": 63488 00:25:54.968 } 00:25:54.968 ] 00:25:54.968 }' 00:25:54.968 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:54.968 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:54.968 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:54.968 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:54.968 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 79467 00:25:54.968 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 79467 ']' 00:25:54.968 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 79467 00:25:54.968 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:25:54.968 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:54.968 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79467 00:25:54.968 killing process with pid 79467 00:25:54.968 Received shutdown signal, test time was about 60.000000 seconds 00:25:54.968 00:25:54.968 Latency(us) 00:25:54.968 [2024-12-05T12:57:37.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.968 [2024-12-05T12:57:37.555Z] =================================================================================================================== 00:25:54.968 [2024-12-05T12:57:37.555Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:54.968 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:54.968 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:54.968 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79467' 00:25:54.968 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 79467 00:25:54.968 [2024-12-05 12:57:37.385209] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:54.968 12:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 79467 00:25:54.968 [2024-12-05 12:57:37.385301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:54.968 [2024-12-05 12:57:37.385353] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:54.968 [2024-12-05 12:57:37.385363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:25:55.227 [2024-12-05 12:57:37.580546] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:55.792 12:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # 
return 0 00:25:55.792 00:25:55.793 real 0m19.900s 00:25:55.793 user 0m24.911s 00:25:55.793 sys 0m1.966s 00:25:55.793 12:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:55.793 ************************************ 00:25:55.793 END TEST raid5f_rebuild_test_sb 00:25:55.793 ************************************ 00:25:55.793 12:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.793 12:57:38 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:25:55.793 12:57:38 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:25:55.793 12:57:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:55.793 12:57:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:55.793 12:57:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:55.793 ************************************ 00:25:55.793 START TEST raid5f_state_function_test 00:25:55.793 ************************************ 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:55.793 12:57:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:25:55.793 
12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80190 00:25:55.793 Process raid pid: 80190 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80190' 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80190 00:25:55.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80190 ']' 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:55.793 12:57:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.793 [2024-12-05 12:57:38.253923] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:25:55.793 [2024-12-05 12:57:38.254043] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.050 [2024-12-05 12:57:38.412387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.050 [2024-12-05 12:57:38.515283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.397 [2024-12-05 12:57:38.653225] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:56.397 [2024-12-05 12:57:38.653257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.655 [2024-12-05 12:57:39.111756] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:56.655 [2024-12-05 12:57:39.111807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:56.655 [2024-12-05 12:57:39.111818] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:56.655 [2024-12-05 12:57:39.111829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:56.655 [2024-12-05 12:57:39.111836] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:25:56.655 [2024-12-05 12:57:39.111845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:56.655 [2024-12-05 12:57:39.111851] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:56.655 [2024-12-05 12:57:39.111861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:56.655 12:57:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:56.655 "name": "Existed_Raid", 00:25:56.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.655 "strip_size_kb": 64, 00:25:56.655 "state": "configuring", 00:25:56.655 "raid_level": "raid5f", 00:25:56.655 "superblock": false, 00:25:56.655 "num_base_bdevs": 4, 00:25:56.655 "num_base_bdevs_discovered": 0, 00:25:56.655 "num_base_bdevs_operational": 4, 00:25:56.655 "base_bdevs_list": [ 00:25:56.655 { 00:25:56.655 "name": "BaseBdev1", 00:25:56.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.655 "is_configured": false, 00:25:56.655 "data_offset": 0, 00:25:56.655 "data_size": 0 00:25:56.655 }, 00:25:56.655 { 00:25:56.655 "name": "BaseBdev2", 00:25:56.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.655 "is_configured": false, 00:25:56.655 "data_offset": 0, 00:25:56.655 "data_size": 0 00:25:56.655 }, 00:25:56.655 { 00:25:56.655 "name": "BaseBdev3", 00:25:56.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.655 "is_configured": false, 00:25:56.655 "data_offset": 0, 00:25:56.655 "data_size": 0 00:25:56.655 }, 00:25:56.655 { 00:25:56.655 "name": "BaseBdev4", 00:25:56.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.655 "is_configured": false, 00:25:56.655 "data_offset": 0, 00:25:56.655 "data_size": 0 00:25:56.655 } 00:25:56.655 ] 00:25:56.655 }' 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:56.655 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.913 12:57:39 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:56.913 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.913 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.913 [2024-12-05 12:57:39.451770] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:56.913 [2024-12-05 12:57:39.451804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:56.913 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.913 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:56.913 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.913 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.913 [2024-12-05 12:57:39.463787] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:56.913 [2024-12-05 12:57:39.463908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:56.913 [2024-12-05 12:57:39.463969] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:56.913 [2024-12-05 12:57:39.463996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:56.913 [2024-12-05 12:57:39.464049] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:56.913 [2024-12-05 12:57:39.464074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:56.913 [2024-12-05 12:57:39.464092] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:25:56.913 [2024-12-05 12:57:39.464142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:56.913 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.913 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:56.913 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.913 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.913 [2024-12-05 12:57:39.496368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:56.913 BaseBdev1 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.170 
12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.170 [ 00:25:57.170 { 00:25:57.170 "name": "BaseBdev1", 00:25:57.170 "aliases": [ 00:25:57.170 "caf28124-47dd-4146-9805-81460e464fc7" 00:25:57.170 ], 00:25:57.170 "product_name": "Malloc disk", 00:25:57.170 "block_size": 512, 00:25:57.170 "num_blocks": 65536, 00:25:57.170 "uuid": "caf28124-47dd-4146-9805-81460e464fc7", 00:25:57.170 "assigned_rate_limits": { 00:25:57.170 "rw_ios_per_sec": 0, 00:25:57.170 "rw_mbytes_per_sec": 0, 00:25:57.170 "r_mbytes_per_sec": 0, 00:25:57.170 "w_mbytes_per_sec": 0 00:25:57.170 }, 00:25:57.170 "claimed": true, 00:25:57.170 "claim_type": "exclusive_write", 00:25:57.170 "zoned": false, 00:25:57.170 "supported_io_types": { 00:25:57.170 "read": true, 00:25:57.170 "write": true, 00:25:57.170 "unmap": true, 00:25:57.170 "flush": true, 00:25:57.170 "reset": true, 00:25:57.170 "nvme_admin": false, 00:25:57.170 "nvme_io": false, 00:25:57.170 "nvme_io_md": false, 00:25:57.170 "write_zeroes": true, 00:25:57.170 "zcopy": true, 00:25:57.170 "get_zone_info": false, 00:25:57.170 "zone_management": false, 00:25:57.170 "zone_append": false, 00:25:57.170 "compare": false, 00:25:57.170 "compare_and_write": false, 00:25:57.170 "abort": true, 00:25:57.170 "seek_hole": false, 00:25:57.170 "seek_data": false, 00:25:57.170 "copy": true, 00:25:57.170 "nvme_iov_md": false 00:25:57.170 }, 00:25:57.170 "memory_domains": [ 00:25:57.170 { 00:25:57.170 "dma_device_id": "system", 00:25:57.170 "dma_device_type": 1 00:25:57.170 }, 00:25:57.170 { 00:25:57.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:57.170 "dma_device_type": 2 00:25:57.170 } 00:25:57.170 ], 00:25:57.170 "driver_specific": {} 00:25:57.170 } 
00:25:57.170 ] 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:57.170 "name": "Existed_Raid", 00:25:57.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.170 "strip_size_kb": 64, 00:25:57.170 "state": "configuring", 00:25:57.170 "raid_level": "raid5f", 00:25:57.170 "superblock": false, 00:25:57.170 "num_base_bdevs": 4, 00:25:57.170 "num_base_bdevs_discovered": 1, 00:25:57.170 "num_base_bdevs_operational": 4, 00:25:57.170 "base_bdevs_list": [ 00:25:57.170 { 00:25:57.170 "name": "BaseBdev1", 00:25:57.170 "uuid": "caf28124-47dd-4146-9805-81460e464fc7", 00:25:57.170 "is_configured": true, 00:25:57.170 "data_offset": 0, 00:25:57.170 "data_size": 65536 00:25:57.170 }, 00:25:57.170 { 00:25:57.170 "name": "BaseBdev2", 00:25:57.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.170 "is_configured": false, 00:25:57.170 "data_offset": 0, 00:25:57.170 "data_size": 0 00:25:57.170 }, 00:25:57.170 { 00:25:57.170 "name": "BaseBdev3", 00:25:57.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.170 "is_configured": false, 00:25:57.170 "data_offset": 0, 00:25:57.170 "data_size": 0 00:25:57.170 }, 00:25:57.170 { 00:25:57.170 "name": "BaseBdev4", 00:25:57.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.170 "is_configured": false, 00:25:57.170 "data_offset": 0, 00:25:57.170 "data_size": 0 00:25:57.170 } 00:25:57.170 ] 00:25:57.170 }' 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:57.170 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.427 
[2024-12-05 12:57:39.852523] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:57.427 [2024-12-05 12:57:39.852700] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.427 [2024-12-05 12:57:39.860571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:57.427 [2024-12-05 12:57:39.862415] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:57.427 [2024-12-05 12:57:39.862456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:57.427 [2024-12-05 12:57:39.862465] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:57.427 [2024-12-05 12:57:39.862476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:57.427 [2024-12-05 12:57:39.862483] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:57.427 [2024-12-05 12:57:39.862507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:57.427 "name": "Existed_Raid", 00:25:57.427 "uuid": "00000000-0000-0000-0000-000000000000", 
00:25:57.427 "strip_size_kb": 64, 00:25:57.427 "state": "configuring", 00:25:57.427 "raid_level": "raid5f", 00:25:57.427 "superblock": false, 00:25:57.427 "num_base_bdevs": 4, 00:25:57.427 "num_base_bdevs_discovered": 1, 00:25:57.427 "num_base_bdevs_operational": 4, 00:25:57.427 "base_bdevs_list": [ 00:25:57.427 { 00:25:57.427 "name": "BaseBdev1", 00:25:57.427 "uuid": "caf28124-47dd-4146-9805-81460e464fc7", 00:25:57.427 "is_configured": true, 00:25:57.427 "data_offset": 0, 00:25:57.427 "data_size": 65536 00:25:57.427 }, 00:25:57.427 { 00:25:57.427 "name": "BaseBdev2", 00:25:57.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.427 "is_configured": false, 00:25:57.427 "data_offset": 0, 00:25:57.427 "data_size": 0 00:25:57.427 }, 00:25:57.427 { 00:25:57.427 "name": "BaseBdev3", 00:25:57.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.427 "is_configured": false, 00:25:57.427 "data_offset": 0, 00:25:57.427 "data_size": 0 00:25:57.427 }, 00:25:57.427 { 00:25:57.427 "name": "BaseBdev4", 00:25:57.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.427 "is_configured": false, 00:25:57.427 "data_offset": 0, 00:25:57.427 "data_size": 0 00:25:57.427 } 00:25:57.427 ] 00:25:57.427 }' 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:57.427 12:57:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.684 [2024-12-05 12:57:40.207253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:57.684 BaseBdev2 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.684 [ 00:25:57.684 { 00:25:57.684 "name": "BaseBdev2", 00:25:57.684 "aliases": [ 00:25:57.684 "6a3bf19b-019b-479b-8ad0-690ed17a9d61" 00:25:57.684 ], 00:25:57.684 "product_name": "Malloc disk", 00:25:57.684 "block_size": 512, 00:25:57.684 "num_blocks": 65536, 00:25:57.684 "uuid": "6a3bf19b-019b-479b-8ad0-690ed17a9d61", 00:25:57.684 "assigned_rate_limits": { 00:25:57.684 "rw_ios_per_sec": 0, 00:25:57.684 "rw_mbytes_per_sec": 0, 00:25:57.684 
"r_mbytes_per_sec": 0, 00:25:57.684 "w_mbytes_per_sec": 0 00:25:57.684 }, 00:25:57.684 "claimed": true, 00:25:57.684 "claim_type": "exclusive_write", 00:25:57.684 "zoned": false, 00:25:57.684 "supported_io_types": { 00:25:57.684 "read": true, 00:25:57.684 "write": true, 00:25:57.684 "unmap": true, 00:25:57.684 "flush": true, 00:25:57.684 "reset": true, 00:25:57.684 "nvme_admin": false, 00:25:57.684 "nvme_io": false, 00:25:57.684 "nvme_io_md": false, 00:25:57.684 "write_zeroes": true, 00:25:57.684 "zcopy": true, 00:25:57.684 "get_zone_info": false, 00:25:57.684 "zone_management": false, 00:25:57.684 "zone_append": false, 00:25:57.684 "compare": false, 00:25:57.684 "compare_and_write": false, 00:25:57.684 "abort": true, 00:25:57.684 "seek_hole": false, 00:25:57.684 "seek_data": false, 00:25:57.684 "copy": true, 00:25:57.684 "nvme_iov_md": false 00:25:57.684 }, 00:25:57.684 "memory_domains": [ 00:25:57.684 { 00:25:57.684 "dma_device_id": "system", 00:25:57.684 "dma_device_type": 1 00:25:57.684 }, 00:25:57.684 { 00:25:57.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:57.684 "dma_device_type": 2 00:25:57.684 } 00:25:57.684 ], 00:25:57.684 "driver_specific": {} 00:25:57.684 } 00:25:57.684 ] 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:57.684 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:57.685 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:57.685 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.685 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.685 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.685 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.685 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:57.685 "name": "Existed_Raid", 00:25:57.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.685 "strip_size_kb": 64, 00:25:57.685 "state": "configuring", 00:25:57.685 "raid_level": "raid5f", 00:25:57.685 "superblock": false, 00:25:57.685 "num_base_bdevs": 4, 00:25:57.685 "num_base_bdevs_discovered": 2, 00:25:57.685 "num_base_bdevs_operational": 4, 00:25:57.685 "base_bdevs_list": [ 00:25:57.685 { 00:25:57.685 "name": "BaseBdev1", 00:25:57.685 "uuid": 
"caf28124-47dd-4146-9805-81460e464fc7", 00:25:57.685 "is_configured": true, 00:25:57.685 "data_offset": 0, 00:25:57.685 "data_size": 65536 00:25:57.685 }, 00:25:57.685 { 00:25:57.685 "name": "BaseBdev2", 00:25:57.685 "uuid": "6a3bf19b-019b-479b-8ad0-690ed17a9d61", 00:25:57.685 "is_configured": true, 00:25:57.685 "data_offset": 0, 00:25:57.685 "data_size": 65536 00:25:57.685 }, 00:25:57.685 { 00:25:57.685 "name": "BaseBdev3", 00:25:57.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.685 "is_configured": false, 00:25:57.685 "data_offset": 0, 00:25:57.685 "data_size": 0 00:25:57.685 }, 00:25:57.685 { 00:25:57.685 "name": "BaseBdev4", 00:25:57.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.685 "is_configured": false, 00:25:57.685 "data_offset": 0, 00:25:57.685 "data_size": 0 00:25:57.685 } 00:25:57.685 ] 00:25:57.685 }' 00:25:57.685 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:57.685 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.247 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:58.247 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.247 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.247 [2024-12-05 12:57:40.604840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:58.247 BaseBdev3 00:25:58.247 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.247 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:25:58.247 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:58.247 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:25:58.247 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.248 [ 00:25:58.248 { 00:25:58.248 "name": "BaseBdev3", 00:25:58.248 "aliases": [ 00:25:58.248 "395766aa-ce9e-4feb-972a-205047036a67" 00:25:58.248 ], 00:25:58.248 "product_name": "Malloc disk", 00:25:58.248 "block_size": 512, 00:25:58.248 "num_blocks": 65536, 00:25:58.248 "uuid": "395766aa-ce9e-4feb-972a-205047036a67", 00:25:58.248 "assigned_rate_limits": { 00:25:58.248 "rw_ios_per_sec": 0, 00:25:58.248 "rw_mbytes_per_sec": 0, 00:25:58.248 "r_mbytes_per_sec": 0, 00:25:58.248 "w_mbytes_per_sec": 0 00:25:58.248 }, 00:25:58.248 "claimed": true, 00:25:58.248 "claim_type": "exclusive_write", 00:25:58.248 "zoned": false, 00:25:58.248 "supported_io_types": { 00:25:58.248 "read": true, 00:25:58.248 "write": true, 00:25:58.248 "unmap": true, 00:25:58.248 "flush": true, 00:25:58.248 "reset": true, 00:25:58.248 "nvme_admin": false, 
00:25:58.248 "nvme_io": false, 00:25:58.248 "nvme_io_md": false, 00:25:58.248 "write_zeroes": true, 00:25:58.248 "zcopy": true, 00:25:58.248 "get_zone_info": false, 00:25:58.248 "zone_management": false, 00:25:58.248 "zone_append": false, 00:25:58.248 "compare": false, 00:25:58.248 "compare_and_write": false, 00:25:58.248 "abort": true, 00:25:58.248 "seek_hole": false, 00:25:58.248 "seek_data": false, 00:25:58.248 "copy": true, 00:25:58.248 "nvme_iov_md": false 00:25:58.248 }, 00:25:58.248 "memory_domains": [ 00:25:58.248 { 00:25:58.248 "dma_device_id": "system", 00:25:58.248 "dma_device_type": 1 00:25:58.248 }, 00:25:58.248 { 00:25:58.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.248 "dma_device_type": 2 00:25:58.248 } 00:25:58.248 ], 00:25:58.248 "driver_specific": {} 00:25:58.248 } 00:25:58.248 ] 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:58.248 "name": "Existed_Raid", 00:25:58.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.248 "strip_size_kb": 64, 00:25:58.248 "state": "configuring", 00:25:58.248 "raid_level": "raid5f", 00:25:58.248 "superblock": false, 00:25:58.248 "num_base_bdevs": 4, 00:25:58.248 "num_base_bdevs_discovered": 3, 00:25:58.248 "num_base_bdevs_operational": 4, 00:25:58.248 "base_bdevs_list": [ 00:25:58.248 { 00:25:58.248 "name": "BaseBdev1", 00:25:58.248 "uuid": "caf28124-47dd-4146-9805-81460e464fc7", 00:25:58.248 "is_configured": true, 00:25:58.248 "data_offset": 0, 00:25:58.248 "data_size": 65536 00:25:58.248 }, 00:25:58.248 { 00:25:58.248 "name": "BaseBdev2", 00:25:58.248 "uuid": "6a3bf19b-019b-479b-8ad0-690ed17a9d61", 00:25:58.248 "is_configured": true, 00:25:58.248 "data_offset": 0, 00:25:58.248 "data_size": 65536 00:25:58.248 }, 00:25:58.248 { 
00:25:58.248 "name": "BaseBdev3", 00:25:58.248 "uuid": "395766aa-ce9e-4feb-972a-205047036a67", 00:25:58.248 "is_configured": true, 00:25:58.248 "data_offset": 0, 00:25:58.248 "data_size": 65536 00:25:58.248 }, 00:25:58.248 { 00:25:58.248 "name": "BaseBdev4", 00:25:58.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.248 "is_configured": false, 00:25:58.248 "data_offset": 0, 00:25:58.248 "data_size": 0 00:25:58.248 } 00:25:58.248 ] 00:25:58.248 }' 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:58.248 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.505 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:25:58.505 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.505 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.505 [2024-12-05 12:57:40.951657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:58.505 [2024-12-05 12:57:40.951846] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:58.505 [2024-12-05 12:57:40.951880] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:25:58.505 [2024-12-05 12:57:40.952660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:58.505 [2024-12-05 12:57:40.957692] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:58.505 [2024-12-05 12:57:40.957713] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:58.505 [2024-12-05 12:57:40.957978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:58.505 BaseBdev4 00:25:58.505 12:57:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.505 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:25:58.505 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:25:58.505 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:58.505 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:58.505 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:58.505 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:58.505 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:58.505 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.505 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.505 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.505 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:58.505 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.505 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.505 [ 00:25:58.505 { 00:25:58.505 "name": "BaseBdev4", 00:25:58.505 "aliases": [ 00:25:58.505 "212f3442-58c2-4c91-b295-83953a308a1a" 00:25:58.505 ], 00:25:58.505 "product_name": "Malloc disk", 00:25:58.505 "block_size": 512, 00:25:58.505 "num_blocks": 65536, 00:25:58.505 "uuid": "212f3442-58c2-4c91-b295-83953a308a1a", 00:25:58.505 "assigned_rate_limits": { 00:25:58.505 "rw_ios_per_sec": 0, 00:25:58.505 
"rw_mbytes_per_sec": 0, 00:25:58.505 "r_mbytes_per_sec": 0, 00:25:58.505 "w_mbytes_per_sec": 0 00:25:58.505 }, 00:25:58.505 "claimed": true, 00:25:58.505 "claim_type": "exclusive_write", 00:25:58.505 "zoned": false, 00:25:58.505 "supported_io_types": { 00:25:58.505 "read": true, 00:25:58.505 "write": true, 00:25:58.505 "unmap": true, 00:25:58.505 "flush": true, 00:25:58.505 "reset": true, 00:25:58.505 "nvme_admin": false, 00:25:58.505 "nvme_io": false, 00:25:58.505 "nvme_io_md": false, 00:25:58.505 "write_zeroes": true, 00:25:58.505 "zcopy": true, 00:25:58.505 "get_zone_info": false, 00:25:58.505 "zone_management": false, 00:25:58.505 "zone_append": false, 00:25:58.505 "compare": false, 00:25:58.505 "compare_and_write": false, 00:25:58.505 "abort": true, 00:25:58.505 "seek_hole": false, 00:25:58.505 "seek_data": false, 00:25:58.505 "copy": true, 00:25:58.506 "nvme_iov_md": false 00:25:58.506 }, 00:25:58.506 "memory_domains": [ 00:25:58.506 { 00:25:58.506 "dma_device_id": "system", 00:25:58.506 "dma_device_type": 1 00:25:58.506 }, 00:25:58.506 { 00:25:58.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.506 "dma_device_type": 2 00:25:58.506 } 00:25:58.506 ], 00:25:58.506 "driver_specific": {} 00:25:58.506 } 00:25:58.506 ] 00:25:58.506 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.506 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:58.506 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:58.506 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:58.506 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:25:58.506 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:58.506 12:57:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:58.506 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:58.506 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:58.506 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:58.506 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:58.506 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:58.506 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:58.506 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:58.506 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:58.506 12:57:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:58.506 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.506 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.506 12:57:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.506 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:58.506 "name": "Existed_Raid", 00:25:58.506 "uuid": "095f2cde-51a0-4d27-9215-174ed4631d26", 00:25:58.506 "strip_size_kb": 64, 00:25:58.506 "state": "online", 00:25:58.506 "raid_level": "raid5f", 00:25:58.506 "superblock": false, 00:25:58.506 "num_base_bdevs": 4, 00:25:58.506 "num_base_bdevs_discovered": 4, 00:25:58.506 "num_base_bdevs_operational": 4, 00:25:58.506 "base_bdevs_list": [ 00:25:58.506 { 00:25:58.506 "name": 
"BaseBdev1", 00:25:58.506 "uuid": "caf28124-47dd-4146-9805-81460e464fc7", 00:25:58.506 "is_configured": true, 00:25:58.506 "data_offset": 0, 00:25:58.506 "data_size": 65536 00:25:58.506 }, 00:25:58.506 { 00:25:58.506 "name": "BaseBdev2", 00:25:58.506 "uuid": "6a3bf19b-019b-479b-8ad0-690ed17a9d61", 00:25:58.506 "is_configured": true, 00:25:58.506 "data_offset": 0, 00:25:58.506 "data_size": 65536 00:25:58.506 }, 00:25:58.506 { 00:25:58.506 "name": "BaseBdev3", 00:25:58.506 "uuid": "395766aa-ce9e-4feb-972a-205047036a67", 00:25:58.506 "is_configured": true, 00:25:58.506 "data_offset": 0, 00:25:58.506 "data_size": 65536 00:25:58.506 }, 00:25:58.506 { 00:25:58.506 "name": "BaseBdev4", 00:25:58.506 "uuid": "212f3442-58c2-4c91-b295-83953a308a1a", 00:25:58.506 "is_configured": true, 00:25:58.506 "data_offset": 0, 00:25:58.506 "data_size": 65536 00:25:58.506 } 00:25:58.506 ] 00:25:58.506 }' 00:25:58.506 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:58.506 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.762 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:58.762 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:58.762 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:58.762 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:58.762 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:58.762 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:58.762 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:58.762 12:57:41 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.762 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:58.762 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.762 [2024-12-05 12:57:41.303516] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:58.762 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.762 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:58.762 "name": "Existed_Raid", 00:25:58.762 "aliases": [ 00:25:58.762 "095f2cde-51a0-4d27-9215-174ed4631d26" 00:25:58.762 ], 00:25:58.762 "product_name": "Raid Volume", 00:25:58.762 "block_size": 512, 00:25:58.762 "num_blocks": 196608, 00:25:58.762 "uuid": "095f2cde-51a0-4d27-9215-174ed4631d26", 00:25:58.762 "assigned_rate_limits": { 00:25:58.762 "rw_ios_per_sec": 0, 00:25:58.762 "rw_mbytes_per_sec": 0, 00:25:58.762 "r_mbytes_per_sec": 0, 00:25:58.762 "w_mbytes_per_sec": 0 00:25:58.762 }, 00:25:58.762 "claimed": false, 00:25:58.762 "zoned": false, 00:25:58.762 "supported_io_types": { 00:25:58.763 "read": true, 00:25:58.763 "write": true, 00:25:58.763 "unmap": false, 00:25:58.763 "flush": false, 00:25:58.763 "reset": true, 00:25:58.763 "nvme_admin": false, 00:25:58.763 "nvme_io": false, 00:25:58.763 "nvme_io_md": false, 00:25:58.763 "write_zeroes": true, 00:25:58.763 "zcopy": false, 00:25:58.763 "get_zone_info": false, 00:25:58.763 "zone_management": false, 00:25:58.763 "zone_append": false, 00:25:58.763 "compare": false, 00:25:58.763 "compare_and_write": false, 00:25:58.763 "abort": false, 00:25:58.763 "seek_hole": false, 00:25:58.763 "seek_data": false, 00:25:58.763 "copy": false, 00:25:58.763 "nvme_iov_md": false 00:25:58.763 }, 00:25:58.763 "driver_specific": { 00:25:58.763 "raid": { 00:25:58.763 "uuid": "095f2cde-51a0-4d27-9215-174ed4631d26", 00:25:58.763 "strip_size_kb": 64, 
00:25:58.763 "state": "online", 00:25:58.763 "raid_level": "raid5f", 00:25:58.763 "superblock": false, 00:25:58.763 "num_base_bdevs": 4, 00:25:58.763 "num_base_bdevs_discovered": 4, 00:25:58.763 "num_base_bdevs_operational": 4, 00:25:58.763 "base_bdevs_list": [ 00:25:58.763 { 00:25:58.763 "name": "BaseBdev1", 00:25:58.763 "uuid": "caf28124-47dd-4146-9805-81460e464fc7", 00:25:58.763 "is_configured": true, 00:25:58.763 "data_offset": 0, 00:25:58.763 "data_size": 65536 00:25:58.763 }, 00:25:58.763 { 00:25:58.763 "name": "BaseBdev2", 00:25:58.763 "uuid": "6a3bf19b-019b-479b-8ad0-690ed17a9d61", 00:25:58.763 "is_configured": true, 00:25:58.763 "data_offset": 0, 00:25:58.763 "data_size": 65536 00:25:58.763 }, 00:25:58.763 { 00:25:58.763 "name": "BaseBdev3", 00:25:58.763 "uuid": "395766aa-ce9e-4feb-972a-205047036a67", 00:25:58.763 "is_configured": true, 00:25:58.763 "data_offset": 0, 00:25:58.763 "data_size": 65536 00:25:58.763 }, 00:25:58.763 { 00:25:58.763 "name": "BaseBdev4", 00:25:58.763 "uuid": "212f3442-58c2-4c91-b295-83953a308a1a", 00:25:58.763 "is_configured": true, 00:25:58.763 "data_offset": 0, 00:25:58.763 "data_size": 65536 00:25:58.763 } 00:25:58.763 ] 00:25:58.763 } 00:25:58.763 } 00:25:58.763 }' 00:25:58.763 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:59.019 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:59.019 BaseBdev2 00:25:59.019 BaseBdev3 00:25:59.019 BaseBdev4' 00:25:59.019 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:59.019 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:59.019 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:59.019 12:57:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:59.019 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:59.019 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:25:59.020 [2024-12-05 12:57:41.523371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:59.020 12:57:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.020 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.276 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:59.276 "name": "Existed_Raid", 00:25:59.276 "uuid": "095f2cde-51a0-4d27-9215-174ed4631d26", 00:25:59.276 "strip_size_kb": 64, 00:25:59.276 "state": "online", 00:25:59.276 "raid_level": "raid5f", 00:25:59.276 "superblock": false, 00:25:59.276 "num_base_bdevs": 4, 00:25:59.276 "num_base_bdevs_discovered": 3, 00:25:59.276 "num_base_bdevs_operational": 3, 00:25:59.276 "base_bdevs_list": [ 00:25:59.276 { 00:25:59.276 "name": null, 00:25:59.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.276 "is_configured": false, 00:25:59.276 "data_offset": 0, 00:25:59.276 "data_size": 65536 00:25:59.276 }, 00:25:59.276 { 00:25:59.276 "name": "BaseBdev2", 00:25:59.276 "uuid": "6a3bf19b-019b-479b-8ad0-690ed17a9d61", 00:25:59.276 "is_configured": true, 00:25:59.276 "data_offset": 0, 00:25:59.276 "data_size": 65536 00:25:59.276 }, 00:25:59.276 { 00:25:59.276 "name": "BaseBdev3", 00:25:59.276 "uuid": "395766aa-ce9e-4feb-972a-205047036a67", 00:25:59.276 "is_configured": true, 00:25:59.276 "data_offset": 0, 00:25:59.276 "data_size": 65536 00:25:59.276 }, 00:25:59.276 { 00:25:59.276 "name": "BaseBdev4", 00:25:59.276 "uuid": "212f3442-58c2-4c91-b295-83953a308a1a", 00:25:59.276 "is_configured": true, 00:25:59.276 "data_offset": 0, 00:25:59.276 "data_size": 65536 00:25:59.276 } 00:25:59.276 ] 00:25:59.276 }' 00:25:59.276 
12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:59.276 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.533 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:59.533 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:59.533 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:59.533 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:59.534 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.534 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.534 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.534 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:59.534 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:59.534 12:57:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:59.534 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.534 12:57:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.534 [2024-12-05 12:57:41.941638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:59.534 [2024-12-05 12:57:41.941730] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:59.534 [2024-12-05 12:57:42.000924] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:59.534 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:25:59.534 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:59.534 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:59.534 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:59.534 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:59.534 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.534 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.534 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.534 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:59.534 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:59.534 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:25:59.534 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.534 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.534 [2024-12-05 12:57:42.040968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:59.534 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.534 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:59.534 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:59.534 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:59.534 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:25:59.534 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.534 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.791 [2024-12-05 12:57:42.140101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:59.791 [2024-12-05 12:57:42.140240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:59.791 12:57:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.791 BaseBdev2 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.791 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.791 [ 00:25:59.791 { 00:25:59.791 "name": "BaseBdev2", 00:25:59.791 "aliases": [ 00:25:59.791 "66ae4e3c-c049-40bb-a51f-17d39a4569e6" 00:25:59.791 ], 00:25:59.791 "product_name": "Malloc disk", 00:25:59.791 "block_size": 512, 00:25:59.791 "num_blocks": 65536, 00:25:59.791 "uuid": "66ae4e3c-c049-40bb-a51f-17d39a4569e6", 00:25:59.791 "assigned_rate_limits": { 00:25:59.791 "rw_ios_per_sec": 0, 00:25:59.791 "rw_mbytes_per_sec": 0, 00:25:59.791 "r_mbytes_per_sec": 0, 00:25:59.791 "w_mbytes_per_sec": 0 00:25:59.791 }, 00:25:59.791 "claimed": false, 00:25:59.791 "zoned": false, 00:25:59.791 "supported_io_types": { 00:25:59.791 "read": true, 00:25:59.792 "write": true, 00:25:59.792 "unmap": true, 00:25:59.792 "flush": true, 00:25:59.792 "reset": true, 00:25:59.792 "nvme_admin": false, 00:25:59.792 "nvme_io": false, 00:25:59.792 "nvme_io_md": false, 00:25:59.792 "write_zeroes": true, 00:25:59.792 "zcopy": true, 00:25:59.792 "get_zone_info": false, 00:25:59.792 "zone_management": false, 00:25:59.792 "zone_append": false, 00:25:59.792 "compare": false, 00:25:59.792 "compare_and_write": false, 00:25:59.792 "abort": true, 00:25:59.792 "seek_hole": false, 00:25:59.792 "seek_data": false, 00:25:59.792 "copy": true, 00:25:59.792 "nvme_iov_md": false 00:25:59.792 }, 00:25:59.792 "memory_domains": [ 00:25:59.792 { 00:25:59.792 "dma_device_id": "system", 00:25:59.792 "dma_device_type": 1 00:25:59.792 }, 
00:25:59.792 { 00:25:59.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:59.792 "dma_device_type": 2 00:25:59.792 } 00:25:59.792 ], 00:25:59.792 "driver_specific": {} 00:25:59.792 } 00:25:59.792 ] 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.792 BaseBdev3 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.792 [ 00:25:59.792 { 00:25:59.792 "name": "BaseBdev3", 00:25:59.792 "aliases": [ 00:25:59.792 "bf9798ba-6e43-4f41-a5bf-cb19d8d804c8" 00:25:59.792 ], 00:25:59.792 "product_name": "Malloc disk", 00:25:59.792 "block_size": 512, 00:25:59.792 "num_blocks": 65536, 00:25:59.792 "uuid": "bf9798ba-6e43-4f41-a5bf-cb19d8d804c8", 00:25:59.792 "assigned_rate_limits": { 00:25:59.792 "rw_ios_per_sec": 0, 00:25:59.792 "rw_mbytes_per_sec": 0, 00:25:59.792 "r_mbytes_per_sec": 0, 00:25:59.792 "w_mbytes_per_sec": 0 00:25:59.792 }, 00:25:59.792 "claimed": false, 00:25:59.792 "zoned": false, 00:25:59.792 "supported_io_types": { 00:25:59.792 "read": true, 00:25:59.792 "write": true, 00:25:59.792 "unmap": true, 00:25:59.792 "flush": true, 00:25:59.792 "reset": true, 00:25:59.792 "nvme_admin": false, 00:25:59.792 "nvme_io": false, 00:25:59.792 "nvme_io_md": false, 00:25:59.792 "write_zeroes": true, 00:25:59.792 "zcopy": true, 00:25:59.792 "get_zone_info": false, 00:25:59.792 "zone_management": false, 00:25:59.792 "zone_append": false, 00:25:59.792 "compare": false, 00:25:59.792 "compare_and_write": false, 00:25:59.792 "abort": true, 00:25:59.792 "seek_hole": false, 00:25:59.792 "seek_data": false, 00:25:59.792 "copy": true, 00:25:59.792 "nvme_iov_md": false 00:25:59.792 }, 00:25:59.792 "memory_domains": [ 00:25:59.792 { 00:25:59.792 "dma_device_id": "system", 00:25:59.792 
"dma_device_type": 1 00:25:59.792 }, 00:25:59.792 { 00:25:59.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:59.792 "dma_device_type": 2 00:25:59.792 } 00:25:59.792 ], 00:25:59.792 "driver_specific": {} 00:25:59.792 } 00:25:59.792 ] 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.792 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.071 BaseBdev4 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:00.071 12:57:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.071 [ 00:26:00.071 { 00:26:00.071 "name": "BaseBdev4", 00:26:00.071 "aliases": [ 00:26:00.071 "12f71418-4f4e-4cb2-83eb-fd00770fb7e7" 00:26:00.071 ], 00:26:00.071 "product_name": "Malloc disk", 00:26:00.071 "block_size": 512, 00:26:00.071 "num_blocks": 65536, 00:26:00.071 "uuid": "12f71418-4f4e-4cb2-83eb-fd00770fb7e7", 00:26:00.071 "assigned_rate_limits": { 00:26:00.071 "rw_ios_per_sec": 0, 00:26:00.071 "rw_mbytes_per_sec": 0, 00:26:00.071 "r_mbytes_per_sec": 0, 00:26:00.071 "w_mbytes_per_sec": 0 00:26:00.071 }, 00:26:00.071 "claimed": false, 00:26:00.071 "zoned": false, 00:26:00.071 "supported_io_types": { 00:26:00.071 "read": true, 00:26:00.071 "write": true, 00:26:00.071 "unmap": true, 00:26:00.071 "flush": true, 00:26:00.071 "reset": true, 00:26:00.071 "nvme_admin": false, 00:26:00.071 "nvme_io": false, 00:26:00.071 "nvme_io_md": false, 00:26:00.071 "write_zeroes": true, 00:26:00.071 "zcopy": true, 00:26:00.071 "get_zone_info": false, 00:26:00.071 "zone_management": false, 00:26:00.071 "zone_append": false, 00:26:00.071 "compare": false, 00:26:00.071 "compare_and_write": false, 00:26:00.071 "abort": true, 00:26:00.071 "seek_hole": false, 00:26:00.071 "seek_data": false, 00:26:00.071 "copy": true, 00:26:00.071 "nvme_iov_md": false 00:26:00.071 }, 00:26:00.071 "memory_domains": [ 00:26:00.071 { 00:26:00.071 
"dma_device_id": "system", 00:26:00.071 "dma_device_type": 1 00:26:00.071 }, 00:26:00.071 { 00:26:00.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.071 "dma_device_type": 2 00:26:00.071 } 00:26:00.071 ], 00:26:00.071 "driver_specific": {} 00:26:00.071 } 00:26:00.071 ] 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.071 [2024-12-05 12:57:42.407079] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:00.071 [2024-12-05 12:57:42.407216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:00.071 [2024-12-05 12:57:42.407291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:00.071 [2024-12-05 12:57:42.409145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:00.071 [2024-12-05 12:57:42.409275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:00.071 "name": "Existed_Raid", 00:26:00.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.071 "strip_size_kb": 64, 00:26:00.071 "state": "configuring", 00:26:00.071 "raid_level": "raid5f", 00:26:00.071 "superblock": false, 00:26:00.071 
"num_base_bdevs": 4, 00:26:00.071 "num_base_bdevs_discovered": 3, 00:26:00.071 "num_base_bdevs_operational": 4, 00:26:00.071 "base_bdevs_list": [ 00:26:00.071 { 00:26:00.071 "name": "BaseBdev1", 00:26:00.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.071 "is_configured": false, 00:26:00.071 "data_offset": 0, 00:26:00.071 "data_size": 0 00:26:00.071 }, 00:26:00.071 { 00:26:00.071 "name": "BaseBdev2", 00:26:00.071 "uuid": "66ae4e3c-c049-40bb-a51f-17d39a4569e6", 00:26:00.071 "is_configured": true, 00:26:00.071 "data_offset": 0, 00:26:00.071 "data_size": 65536 00:26:00.071 }, 00:26:00.071 { 00:26:00.071 "name": "BaseBdev3", 00:26:00.071 "uuid": "bf9798ba-6e43-4f41-a5bf-cb19d8d804c8", 00:26:00.071 "is_configured": true, 00:26:00.071 "data_offset": 0, 00:26:00.071 "data_size": 65536 00:26:00.071 }, 00:26:00.071 { 00:26:00.071 "name": "BaseBdev4", 00:26:00.071 "uuid": "12f71418-4f4e-4cb2-83eb-fd00770fb7e7", 00:26:00.071 "is_configured": true, 00:26:00.071 "data_offset": 0, 00:26:00.071 "data_size": 65536 00:26:00.071 } 00:26:00.071 ] 00:26:00.071 }' 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:00.071 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.329 [2024-12-05 12:57:42.739152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:00.329 "name": "Existed_Raid", 00:26:00.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.329 "strip_size_kb": 64, 00:26:00.329 "state": "configuring", 00:26:00.329 "raid_level": "raid5f", 00:26:00.329 "superblock": false, 00:26:00.329 "num_base_bdevs": 4, 
00:26:00.329 "num_base_bdevs_discovered": 2, 00:26:00.329 "num_base_bdevs_operational": 4, 00:26:00.329 "base_bdevs_list": [ 00:26:00.329 { 00:26:00.329 "name": "BaseBdev1", 00:26:00.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.329 "is_configured": false, 00:26:00.329 "data_offset": 0, 00:26:00.329 "data_size": 0 00:26:00.329 }, 00:26:00.329 { 00:26:00.329 "name": null, 00:26:00.329 "uuid": "66ae4e3c-c049-40bb-a51f-17d39a4569e6", 00:26:00.329 "is_configured": false, 00:26:00.329 "data_offset": 0, 00:26:00.329 "data_size": 65536 00:26:00.329 }, 00:26:00.329 { 00:26:00.329 "name": "BaseBdev3", 00:26:00.329 "uuid": "bf9798ba-6e43-4f41-a5bf-cb19d8d804c8", 00:26:00.329 "is_configured": true, 00:26:00.329 "data_offset": 0, 00:26:00.329 "data_size": 65536 00:26:00.329 }, 00:26:00.329 { 00:26:00.329 "name": "BaseBdev4", 00:26:00.329 "uuid": "12f71418-4f4e-4cb2-83eb-fd00770fb7e7", 00:26:00.329 "is_configured": true, 00:26:00.329 "data_offset": 0, 00:26:00.329 "data_size": 65536 00:26:00.329 } 00:26:00.329 ] 00:26:00.329 }' 00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:00.329 12:57:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.587 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:00.587 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.587 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.587 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:00.587 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.587 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:26:00.587 12:57:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:00.587 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.587 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.587 [2024-12-05 12:57:43.129486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:00.587 BaseBdev1 00:26:00.587 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.587 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:26:00.587 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:26:00.587 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:00.587 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:00.587 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:00.587 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:00.587 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:00.587 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.587 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.587 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.587 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:00.587 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.587 12:57:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.587 [ 00:26:00.587 { 00:26:00.587 "name": "BaseBdev1", 00:26:00.587 "aliases": [ 00:26:00.587 "3f6cf7bf-de0d-423d-a38c-60a2fb6dc65b" 00:26:00.587 ], 00:26:00.587 "product_name": "Malloc disk", 00:26:00.587 "block_size": 512, 00:26:00.587 "num_blocks": 65536, 00:26:00.587 "uuid": "3f6cf7bf-de0d-423d-a38c-60a2fb6dc65b", 00:26:00.587 "assigned_rate_limits": { 00:26:00.587 "rw_ios_per_sec": 0, 00:26:00.587 "rw_mbytes_per_sec": 0, 00:26:00.587 "r_mbytes_per_sec": 0, 00:26:00.587 "w_mbytes_per_sec": 0 00:26:00.587 }, 00:26:00.587 "claimed": true, 00:26:00.587 "claim_type": "exclusive_write", 00:26:00.587 "zoned": false, 00:26:00.587 "supported_io_types": { 00:26:00.587 "read": true, 00:26:00.587 "write": true, 00:26:00.587 "unmap": true, 00:26:00.587 "flush": true, 00:26:00.587 "reset": true, 00:26:00.588 "nvme_admin": false, 00:26:00.588 "nvme_io": false, 00:26:00.588 "nvme_io_md": false, 00:26:00.588 "write_zeroes": true, 00:26:00.588 "zcopy": true, 00:26:00.588 "get_zone_info": false, 00:26:00.588 "zone_management": false, 00:26:00.588 "zone_append": false, 00:26:00.588 "compare": false, 00:26:00.588 "compare_and_write": false, 00:26:00.588 "abort": true, 00:26:00.588 "seek_hole": false, 00:26:00.588 "seek_data": false, 00:26:00.588 "copy": true, 00:26:00.588 "nvme_iov_md": false 00:26:00.588 }, 00:26:00.588 "memory_domains": [ 00:26:00.588 { 00:26:00.588 "dma_device_id": "system", 00:26:00.588 "dma_device_type": 1 00:26:00.588 }, 00:26:00.588 { 00:26:00.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.588 "dma_device_type": 2 00:26:00.588 } 00:26:00.588 ], 00:26:00.588 "driver_specific": {} 00:26:00.588 } 00:26:00.588 ] 00:26:00.588 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.588 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:00.588 12:57:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:00.588 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:00.588 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:00.588 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:00.588 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:00.588 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:00.588 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:00.588 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:00.588 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:00.588 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:00.588 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:00.588 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.588 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.588 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:00.846 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.846 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:00.846 "name": "Existed_Raid", 00:26:00.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.846 "strip_size_kb": 64, 00:26:00.846 "state": 
"configuring", 00:26:00.846 "raid_level": "raid5f", 00:26:00.846 "superblock": false, 00:26:00.846 "num_base_bdevs": 4, 00:26:00.846 "num_base_bdevs_discovered": 3, 00:26:00.846 "num_base_bdevs_operational": 4, 00:26:00.846 "base_bdevs_list": [ 00:26:00.846 { 00:26:00.846 "name": "BaseBdev1", 00:26:00.846 "uuid": "3f6cf7bf-de0d-423d-a38c-60a2fb6dc65b", 00:26:00.846 "is_configured": true, 00:26:00.846 "data_offset": 0, 00:26:00.846 "data_size": 65536 00:26:00.846 }, 00:26:00.846 { 00:26:00.846 "name": null, 00:26:00.846 "uuid": "66ae4e3c-c049-40bb-a51f-17d39a4569e6", 00:26:00.846 "is_configured": false, 00:26:00.846 "data_offset": 0, 00:26:00.846 "data_size": 65536 00:26:00.846 }, 00:26:00.846 { 00:26:00.846 "name": "BaseBdev3", 00:26:00.846 "uuid": "bf9798ba-6e43-4f41-a5bf-cb19d8d804c8", 00:26:00.846 "is_configured": true, 00:26:00.846 "data_offset": 0, 00:26:00.846 "data_size": 65536 00:26:00.846 }, 00:26:00.846 { 00:26:00.846 "name": "BaseBdev4", 00:26:00.846 "uuid": "12f71418-4f4e-4cb2-83eb-fd00770fb7e7", 00:26:00.846 "is_configured": true, 00:26:00.846 "data_offset": 0, 00:26:00.846 "data_size": 65536 00:26:00.846 } 00:26:00.846 ] 00:26:00.846 }' 00:26:00.846 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:00.846 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.104 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:01.104 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.104 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.105 12:57:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.105 [2024-12-05 12:57:43.521661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:01.105 12:57:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:01.105 "name": "Existed_Raid", 00:26:01.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:01.105 "strip_size_kb": 64, 00:26:01.105 "state": "configuring", 00:26:01.105 "raid_level": "raid5f", 00:26:01.105 "superblock": false, 00:26:01.105 "num_base_bdevs": 4, 00:26:01.105 "num_base_bdevs_discovered": 2, 00:26:01.105 "num_base_bdevs_operational": 4, 00:26:01.105 "base_bdevs_list": [ 00:26:01.105 { 00:26:01.105 "name": "BaseBdev1", 00:26:01.105 "uuid": "3f6cf7bf-de0d-423d-a38c-60a2fb6dc65b", 00:26:01.105 "is_configured": true, 00:26:01.105 "data_offset": 0, 00:26:01.105 "data_size": 65536 00:26:01.105 }, 00:26:01.105 { 00:26:01.105 "name": null, 00:26:01.105 "uuid": "66ae4e3c-c049-40bb-a51f-17d39a4569e6", 00:26:01.105 "is_configured": false, 00:26:01.105 "data_offset": 0, 00:26:01.105 "data_size": 65536 00:26:01.105 }, 00:26:01.105 { 00:26:01.105 "name": null, 00:26:01.105 "uuid": "bf9798ba-6e43-4f41-a5bf-cb19d8d804c8", 00:26:01.105 "is_configured": false, 00:26:01.105 "data_offset": 0, 00:26:01.105 "data_size": 65536 00:26:01.105 }, 00:26:01.105 { 00:26:01.105 "name": "BaseBdev4", 00:26:01.105 "uuid": "12f71418-4f4e-4cb2-83eb-fd00770fb7e7", 00:26:01.105 "is_configured": true, 00:26:01.105 "data_offset": 0, 00:26:01.105 "data_size": 65536 00:26:01.105 } 00:26:01.105 ] 00:26:01.105 }' 00:26:01.105 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:01.105 12:57:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.364 [2024-12-05 12:57:43.865737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:01.364 
12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:01.364 "name": "Existed_Raid", 00:26:01.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:01.364 "strip_size_kb": 64, 00:26:01.364 "state": "configuring", 00:26:01.364 "raid_level": "raid5f", 00:26:01.364 "superblock": false, 00:26:01.364 "num_base_bdevs": 4, 00:26:01.364 "num_base_bdevs_discovered": 3, 00:26:01.364 "num_base_bdevs_operational": 4, 00:26:01.364 "base_bdevs_list": [ 00:26:01.364 { 00:26:01.364 "name": "BaseBdev1", 00:26:01.364 "uuid": "3f6cf7bf-de0d-423d-a38c-60a2fb6dc65b", 00:26:01.364 "is_configured": true, 00:26:01.364 "data_offset": 0, 00:26:01.364 "data_size": 65536 00:26:01.364 }, 00:26:01.364 { 00:26:01.364 "name": null, 00:26:01.364 "uuid": "66ae4e3c-c049-40bb-a51f-17d39a4569e6", 00:26:01.364 "is_configured": 
false, 00:26:01.364 "data_offset": 0, 00:26:01.364 "data_size": 65536 00:26:01.364 }, 00:26:01.364 { 00:26:01.364 "name": "BaseBdev3", 00:26:01.364 "uuid": "bf9798ba-6e43-4f41-a5bf-cb19d8d804c8", 00:26:01.364 "is_configured": true, 00:26:01.364 "data_offset": 0, 00:26:01.364 "data_size": 65536 00:26:01.364 }, 00:26:01.364 { 00:26:01.364 "name": "BaseBdev4", 00:26:01.364 "uuid": "12f71418-4f4e-4cb2-83eb-fd00770fb7e7", 00:26:01.364 "is_configured": true, 00:26:01.364 "data_offset": 0, 00:26:01.364 "data_size": 65536 00:26:01.364 } 00:26:01.364 ] 00:26:01.364 }' 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:01.364 12:57:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.622 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:01.622 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:01.622 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.622 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.622 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.622 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:26:01.622 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:01.622 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.622 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.880 [2024-12-05 12:57:44.209863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:01.880 12:57:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.880 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:01.880 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:01.880 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:01.880 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:01.880 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:01.880 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:01.880 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:01.880 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:01.880 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:01.880 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:01.880 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:01.880 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.880 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.880 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:01.880 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.880 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:01.880 "name": "Existed_Raid", 00:26:01.880 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:01.880 "strip_size_kb": 64, 00:26:01.880 "state": "configuring", 00:26:01.880 "raid_level": "raid5f", 00:26:01.880 "superblock": false, 00:26:01.880 "num_base_bdevs": 4, 00:26:01.880 "num_base_bdevs_discovered": 2, 00:26:01.880 "num_base_bdevs_operational": 4, 00:26:01.880 "base_bdevs_list": [ 00:26:01.880 { 00:26:01.880 "name": null, 00:26:01.880 "uuid": "3f6cf7bf-de0d-423d-a38c-60a2fb6dc65b", 00:26:01.880 "is_configured": false, 00:26:01.880 "data_offset": 0, 00:26:01.880 "data_size": 65536 00:26:01.880 }, 00:26:01.880 { 00:26:01.880 "name": null, 00:26:01.880 "uuid": "66ae4e3c-c049-40bb-a51f-17d39a4569e6", 00:26:01.880 "is_configured": false, 00:26:01.880 "data_offset": 0, 00:26:01.880 "data_size": 65536 00:26:01.880 }, 00:26:01.880 { 00:26:01.880 "name": "BaseBdev3", 00:26:01.880 "uuid": "bf9798ba-6e43-4f41-a5bf-cb19d8d804c8", 00:26:01.880 "is_configured": true, 00:26:01.880 "data_offset": 0, 00:26:01.880 "data_size": 65536 00:26:01.880 }, 00:26:01.880 { 00:26:01.880 "name": "BaseBdev4", 00:26:01.880 "uuid": "12f71418-4f4e-4cb2-83eb-fd00770fb7e7", 00:26:01.880 "is_configured": true, 00:26:01.880 "data_offset": 0, 00:26:01.880 "data_size": 65536 00:26:01.880 } 00:26:01.880 ] 00:26:01.880 }' 00:26:01.880 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:01.880 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.138 [2024-12-05 12:57:44.612775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.138 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:02.139 "name": "Existed_Raid", 00:26:02.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.139 "strip_size_kb": 64, 00:26:02.139 "state": "configuring", 00:26:02.139 "raid_level": "raid5f", 00:26:02.139 "superblock": false, 00:26:02.139 "num_base_bdevs": 4, 00:26:02.139 "num_base_bdevs_discovered": 3, 00:26:02.139 "num_base_bdevs_operational": 4, 00:26:02.139 "base_bdevs_list": [ 00:26:02.139 { 00:26:02.139 "name": null, 00:26:02.139 "uuid": "3f6cf7bf-de0d-423d-a38c-60a2fb6dc65b", 00:26:02.139 "is_configured": false, 00:26:02.139 "data_offset": 0, 00:26:02.139 "data_size": 65536 00:26:02.139 }, 00:26:02.139 { 00:26:02.139 "name": "BaseBdev2", 00:26:02.139 "uuid": "66ae4e3c-c049-40bb-a51f-17d39a4569e6", 00:26:02.139 "is_configured": true, 00:26:02.139 "data_offset": 0, 00:26:02.139 "data_size": 65536 00:26:02.139 }, 00:26:02.139 { 00:26:02.139 "name": "BaseBdev3", 00:26:02.139 "uuid": "bf9798ba-6e43-4f41-a5bf-cb19d8d804c8", 00:26:02.139 "is_configured": true, 00:26:02.139 "data_offset": 0, 00:26:02.139 "data_size": 65536 00:26:02.139 }, 00:26:02.139 { 00:26:02.139 "name": "BaseBdev4", 00:26:02.139 "uuid": "12f71418-4f4e-4cb2-83eb-fd00770fb7e7", 00:26:02.139 "is_configured": true, 00:26:02.139 "data_offset": 0, 00:26:02.139 "data_size": 65536 00:26:02.139 } 00:26:02.139 ] 00:26:02.139 }' 00:26:02.139 12:57:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:02.139 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.397 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.397 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.397 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.397 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:02.397 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.397 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:02.397 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:02.397 12:57:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.397 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.397 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.747 12:57:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3f6cf7bf-de0d-423d-a38c-60a2fb6dc65b 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.747 [2024-12-05 12:57:45.031721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:02.747 [2024-12-05 
12:57:45.031944] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:02.747 [2024-12-05 12:57:45.031959] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:26:02.747 [2024-12-05 12:57:45.032226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:26:02.747 [2024-12-05 12:57:45.036940] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:02.747 [2024-12-05 12:57:45.036963] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:26:02.747 [2024-12-05 12:57:45.037218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:02.747 NewBaseBdev 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.747 [ 00:26:02.747 { 00:26:02.747 "name": "NewBaseBdev", 00:26:02.747 "aliases": [ 00:26:02.747 "3f6cf7bf-de0d-423d-a38c-60a2fb6dc65b" 00:26:02.747 ], 00:26:02.747 "product_name": "Malloc disk", 00:26:02.747 "block_size": 512, 00:26:02.747 "num_blocks": 65536, 00:26:02.747 "uuid": "3f6cf7bf-de0d-423d-a38c-60a2fb6dc65b", 00:26:02.747 "assigned_rate_limits": { 00:26:02.747 "rw_ios_per_sec": 0, 00:26:02.747 "rw_mbytes_per_sec": 0, 00:26:02.747 "r_mbytes_per_sec": 0, 00:26:02.747 "w_mbytes_per_sec": 0 00:26:02.747 }, 00:26:02.747 "claimed": true, 00:26:02.747 "claim_type": "exclusive_write", 00:26:02.747 "zoned": false, 00:26:02.747 "supported_io_types": { 00:26:02.747 "read": true, 00:26:02.747 "write": true, 00:26:02.747 "unmap": true, 00:26:02.747 "flush": true, 00:26:02.747 "reset": true, 00:26:02.747 "nvme_admin": false, 00:26:02.747 "nvme_io": false, 00:26:02.747 "nvme_io_md": false, 00:26:02.747 "write_zeroes": true, 00:26:02.747 "zcopy": true, 00:26:02.747 "get_zone_info": false, 00:26:02.747 "zone_management": false, 00:26:02.747 "zone_append": false, 00:26:02.747 "compare": false, 00:26:02.747 "compare_and_write": false, 00:26:02.747 "abort": true, 00:26:02.747 "seek_hole": false, 00:26:02.747 "seek_data": false, 00:26:02.747 "copy": true, 00:26:02.747 "nvme_iov_md": false 00:26:02.747 }, 00:26:02.747 "memory_domains": [ 00:26:02.747 { 00:26:02.747 "dma_device_id": "system", 00:26:02.747 "dma_device_type": 1 00:26:02.747 }, 00:26:02.747 { 00:26:02.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:02.747 "dma_device_type": 2 00:26:02.747 } 
00:26:02.747 ], 00:26:02.747 "driver_specific": {} 00:26:02.747 } 00:26:02.747 ] 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.747 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:02.747 "name": "Existed_Raid", 00:26:02.747 "uuid": "d1190965-c79e-4d54-8332-5c2d6520e67f", 00:26:02.747 "strip_size_kb": 64, 00:26:02.747 "state": "online", 00:26:02.747 "raid_level": "raid5f", 00:26:02.747 "superblock": false, 00:26:02.747 "num_base_bdevs": 4, 00:26:02.747 "num_base_bdevs_discovered": 4, 00:26:02.747 "num_base_bdevs_operational": 4, 00:26:02.747 "base_bdevs_list": [ 00:26:02.747 { 00:26:02.747 "name": "NewBaseBdev", 00:26:02.747 "uuid": "3f6cf7bf-de0d-423d-a38c-60a2fb6dc65b", 00:26:02.747 "is_configured": true, 00:26:02.747 "data_offset": 0, 00:26:02.747 "data_size": 65536 00:26:02.747 }, 00:26:02.747 { 00:26:02.747 "name": "BaseBdev2", 00:26:02.747 "uuid": "66ae4e3c-c049-40bb-a51f-17d39a4569e6", 00:26:02.747 "is_configured": true, 00:26:02.747 "data_offset": 0, 00:26:02.747 "data_size": 65536 00:26:02.747 }, 00:26:02.748 { 00:26:02.748 "name": "BaseBdev3", 00:26:02.748 "uuid": "bf9798ba-6e43-4f41-a5bf-cb19d8d804c8", 00:26:02.748 "is_configured": true, 00:26:02.748 "data_offset": 0, 00:26:02.748 "data_size": 65536 00:26:02.748 }, 00:26:02.748 { 00:26:02.748 "name": "BaseBdev4", 00:26:02.748 "uuid": "12f71418-4f4e-4cb2-83eb-fd00770fb7e7", 00:26:02.748 "is_configured": true, 00:26:02.748 "data_offset": 0, 00:26:02.748 "data_size": 65536 00:26:02.748 } 00:26:02.748 ] 00:26:02.748 }' 00:26:02.748 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:02.748 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.006 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:26:03.006 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:03.006 12:57:45 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:03.006 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:03.006 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:03.006 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:03.006 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:03.006 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.006 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.006 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:03.006 [2024-12-05 12:57:45.394903] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:03.006 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.006 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:03.006 "name": "Existed_Raid", 00:26:03.006 "aliases": [ 00:26:03.006 "d1190965-c79e-4d54-8332-5c2d6520e67f" 00:26:03.006 ], 00:26:03.006 "product_name": "Raid Volume", 00:26:03.006 "block_size": 512, 00:26:03.006 "num_blocks": 196608, 00:26:03.006 "uuid": "d1190965-c79e-4d54-8332-5c2d6520e67f", 00:26:03.006 "assigned_rate_limits": { 00:26:03.006 "rw_ios_per_sec": 0, 00:26:03.006 "rw_mbytes_per_sec": 0, 00:26:03.006 "r_mbytes_per_sec": 0, 00:26:03.006 "w_mbytes_per_sec": 0 00:26:03.006 }, 00:26:03.006 "claimed": false, 00:26:03.006 "zoned": false, 00:26:03.006 "supported_io_types": { 00:26:03.006 "read": true, 00:26:03.006 "write": true, 00:26:03.006 "unmap": false, 00:26:03.006 "flush": false, 00:26:03.006 "reset": true, 00:26:03.007 "nvme_admin": false, 00:26:03.007 "nvme_io": false, 00:26:03.007 "nvme_io_md": 
false, 00:26:03.007 "write_zeroes": true, 00:26:03.007 "zcopy": false, 00:26:03.007 "get_zone_info": false, 00:26:03.007 "zone_management": false, 00:26:03.007 "zone_append": false, 00:26:03.007 "compare": false, 00:26:03.007 "compare_and_write": false, 00:26:03.007 "abort": false, 00:26:03.007 "seek_hole": false, 00:26:03.007 "seek_data": false, 00:26:03.007 "copy": false, 00:26:03.007 "nvme_iov_md": false 00:26:03.007 }, 00:26:03.007 "driver_specific": { 00:26:03.007 "raid": { 00:26:03.007 "uuid": "d1190965-c79e-4d54-8332-5c2d6520e67f", 00:26:03.007 "strip_size_kb": 64, 00:26:03.007 "state": "online", 00:26:03.007 "raid_level": "raid5f", 00:26:03.007 "superblock": false, 00:26:03.007 "num_base_bdevs": 4, 00:26:03.007 "num_base_bdevs_discovered": 4, 00:26:03.007 "num_base_bdevs_operational": 4, 00:26:03.007 "base_bdevs_list": [ 00:26:03.007 { 00:26:03.007 "name": "NewBaseBdev", 00:26:03.007 "uuid": "3f6cf7bf-de0d-423d-a38c-60a2fb6dc65b", 00:26:03.007 "is_configured": true, 00:26:03.007 "data_offset": 0, 00:26:03.007 "data_size": 65536 00:26:03.007 }, 00:26:03.007 { 00:26:03.007 "name": "BaseBdev2", 00:26:03.007 "uuid": "66ae4e3c-c049-40bb-a51f-17d39a4569e6", 00:26:03.007 "is_configured": true, 00:26:03.007 "data_offset": 0, 00:26:03.007 "data_size": 65536 00:26:03.007 }, 00:26:03.007 { 00:26:03.007 "name": "BaseBdev3", 00:26:03.007 "uuid": "bf9798ba-6e43-4f41-a5bf-cb19d8d804c8", 00:26:03.007 "is_configured": true, 00:26:03.007 "data_offset": 0, 00:26:03.007 "data_size": 65536 00:26:03.007 }, 00:26:03.007 { 00:26:03.007 "name": "BaseBdev4", 00:26:03.007 "uuid": "12f71418-4f4e-4cb2-83eb-fd00770fb7e7", 00:26:03.007 "is_configured": true, 00:26:03.007 "data_offset": 0, 00:26:03.007 "data_size": 65536 00:26:03.007 } 00:26:03.007 ] 00:26:03.007 } 00:26:03.007 } 00:26:03.007 }' 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:03.007 12:57:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:26:03.007 BaseBdev2 00:26:03.007 BaseBdev3 00:26:03.007 BaseBdev4' 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:03.007 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:03.265 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:03.265 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:26:03.265 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.265 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.265 12:57:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.265 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:03.265 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:03.265 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:03.265 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.265 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.265 [2024-12-05 12:57:45.622717] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:03.265 [2024-12-05 12:57:45.622744] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:03.265 [2024-12-05 12:57:45.622811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:03.265 [2024-12-05 12:57:45.623107] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:03.265 [2024-12-05 12:57:45.623117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:26:03.265 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.265 12:57:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80190 00:26:03.265 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80190 ']' 00:26:03.265 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80190 00:26:03.265 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:26:03.265 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:26:03.265 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80190 00:26:03.265 killing process with pid 80190 00:26:03.265 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:03.265 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:03.265 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80190' 00:26:03.265 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80190 00:26:03.265 [2024-12-05 12:57:45.653561] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:03.265 12:57:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80190 00:26:03.522 [2024-12-05 12:57:45.895246] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:04.085 12:57:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:26:04.085 00:26:04.085 real 0m8.422s 00:26:04.085 user 0m13.325s 00:26:04.085 sys 0m1.452s 00:26:04.085 12:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:04.085 12:57:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.085 ************************************ 00:26:04.085 END TEST raid5f_state_function_test 00:26:04.085 ************************************ 00:26:04.085 12:57:46 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:26:04.085 12:57:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:04.085 12:57:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:04.085 12:57:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:04.341 ************************************ 00:26:04.341 START TEST 
raid5f_state_function_test_sb 00:26:04.341 ************************************ 00:26:04.341 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:26:04.341 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:26:04.341 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:26:04.341 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:26:04.341 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:04.341 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:04.341 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:04.341 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:04.341 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:04.341 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:04.341 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:04.341 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:04.341 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:04.341 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:26:04.341 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:04.341 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:04.341 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:26:04.341 
12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:26:04.342 Process raid pid: 80828 00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80828 00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80828' 00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80828 00:26:04.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80828 ']' 00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.342 12:57:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:04.342 [2024-12-05 12:57:46.743542] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:26:04.342 [2024-12-05 12:57:46.743737] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:04.342 [2024-12-05 12:57:46.904739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.598 [2024-12-05 12:57:47.007708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.598 [2024-12-05 12:57:47.147175] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:04.598 [2024-12-05 12:57:47.147208] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.164 [2024-12-05 12:57:47.604029] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:05.164 [2024-12-05 12:57:47.604077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:05.164 [2024-12-05 12:57:47.604086] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:05.164 [2024-12-05 12:57:47.604093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:05.164 [2024-12-05 12:57:47.604098] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:26:05.164 [2024-12-05 12:57:47.604105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:05.164 [2024-12-05 12:57:47.604111] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:05.164 [2024-12-05 12:57:47.604118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:05.164 "name": "Existed_Raid", 00:26:05.164 "uuid": "88b44c7e-31bb-4ad3-b440-b7c971425bfb", 00:26:05.164 "strip_size_kb": 64, 00:26:05.164 "state": "configuring", 00:26:05.164 "raid_level": "raid5f", 00:26:05.164 "superblock": true, 00:26:05.164 "num_base_bdevs": 4, 00:26:05.164 "num_base_bdevs_discovered": 0, 00:26:05.164 "num_base_bdevs_operational": 4, 00:26:05.164 "base_bdevs_list": [ 00:26:05.164 { 00:26:05.164 "name": "BaseBdev1", 00:26:05.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.164 "is_configured": false, 00:26:05.164 "data_offset": 0, 00:26:05.164 "data_size": 0 00:26:05.164 }, 00:26:05.164 { 00:26:05.164 "name": "BaseBdev2", 00:26:05.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.164 "is_configured": false, 00:26:05.164 "data_offset": 0, 00:26:05.164 "data_size": 0 00:26:05.164 }, 00:26:05.164 { 00:26:05.164 "name": "BaseBdev3", 00:26:05.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.164 "is_configured": false, 00:26:05.164 "data_offset": 0, 00:26:05.164 "data_size": 0 00:26:05.164 }, 00:26:05.164 { 00:26:05.164 "name": "BaseBdev4", 00:26:05.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.164 "is_configured": false, 00:26:05.164 "data_offset": 0, 00:26:05.164 "data_size": 0 00:26:05.164 } 00:26:05.164 ] 00:26:05.164 }' 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:05.164 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.423 [2024-12-05 12:57:47.912048] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:05.423 [2024-12-05 12:57:47.912081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.423 [2024-12-05 12:57:47.920048] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:05.423 [2024-12-05 12:57:47.920154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:05.423 [2024-12-05 12:57:47.920206] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:05.423 [2024-12-05 12:57:47.920228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:05.423 [2024-12-05 12:57:47.920271] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:05.423 [2024-12-05 12:57:47.920292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:05.423 [2024-12-05 12:57:47.920307] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:05.423 [2024-12-05 12:57:47.920359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.423 [2024-12-05 12:57:47.952185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:05.423 BaseBdev1 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.423 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.423 [ 00:26:05.423 { 00:26:05.423 "name": "BaseBdev1", 00:26:05.423 "aliases": [ 00:26:05.424 "39971066-e83f-4c9d-be46-7ab42eb69d1a" 00:26:05.424 ], 00:26:05.424 "product_name": "Malloc disk", 00:26:05.424 "block_size": 512, 00:26:05.424 "num_blocks": 65536, 00:26:05.424 "uuid": "39971066-e83f-4c9d-be46-7ab42eb69d1a", 00:26:05.424 "assigned_rate_limits": { 00:26:05.424 "rw_ios_per_sec": 0, 00:26:05.424 "rw_mbytes_per_sec": 0, 00:26:05.424 "r_mbytes_per_sec": 0, 00:26:05.424 "w_mbytes_per_sec": 0 00:26:05.424 }, 00:26:05.424 "claimed": true, 00:26:05.424 "claim_type": "exclusive_write", 00:26:05.424 "zoned": false, 00:26:05.424 "supported_io_types": { 00:26:05.424 "read": true, 00:26:05.424 "write": true, 00:26:05.424 "unmap": true, 00:26:05.424 "flush": true, 00:26:05.424 "reset": true, 00:26:05.424 "nvme_admin": false, 00:26:05.424 "nvme_io": false, 00:26:05.424 "nvme_io_md": false, 00:26:05.424 "write_zeroes": true, 00:26:05.424 "zcopy": true, 00:26:05.424 "get_zone_info": false, 00:26:05.424 "zone_management": false, 00:26:05.424 "zone_append": false, 00:26:05.424 "compare": false, 00:26:05.424 "compare_and_write": false, 00:26:05.424 "abort": true, 00:26:05.424 "seek_hole": false, 00:26:05.424 "seek_data": false, 00:26:05.424 "copy": true, 00:26:05.424 "nvme_iov_md": false 00:26:05.424 }, 00:26:05.424 "memory_domains": [ 00:26:05.424 { 00:26:05.424 "dma_device_id": "system", 00:26:05.424 "dma_device_type": 1 00:26:05.424 }, 00:26:05.424 { 00:26:05.424 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:26:05.424 "dma_device_type": 2 00:26:05.424 } 00:26:05.424 ], 00:26:05.424 "driver_specific": {} 00:26:05.424 } 00:26:05.424 ] 00:26:05.424 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.424 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:05.424 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:05.424 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:05.424 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:05.424 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:05.424 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:05.424 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:05.424 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:05.424 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:05.424 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:05.424 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:05.424 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:05.424 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.424 12:57:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:05.424 12:57:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.424 12:57:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.682 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:05.682 "name": "Existed_Raid", 00:26:05.682 "uuid": "8d162223-24e0-426f-8aae-96f0e48353e1", 00:26:05.682 "strip_size_kb": 64, 00:26:05.682 "state": "configuring", 00:26:05.682 "raid_level": "raid5f", 00:26:05.682 "superblock": true, 00:26:05.682 "num_base_bdevs": 4, 00:26:05.682 "num_base_bdevs_discovered": 1, 00:26:05.682 "num_base_bdevs_operational": 4, 00:26:05.682 "base_bdevs_list": [ 00:26:05.682 { 00:26:05.682 "name": "BaseBdev1", 00:26:05.682 "uuid": "39971066-e83f-4c9d-be46-7ab42eb69d1a", 00:26:05.682 "is_configured": true, 00:26:05.682 "data_offset": 2048, 00:26:05.682 "data_size": 63488 00:26:05.682 }, 00:26:05.682 { 00:26:05.682 "name": "BaseBdev2", 00:26:05.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.682 "is_configured": false, 00:26:05.682 "data_offset": 0, 00:26:05.682 "data_size": 0 00:26:05.682 }, 00:26:05.682 { 00:26:05.682 "name": "BaseBdev3", 00:26:05.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.682 "is_configured": false, 00:26:05.682 "data_offset": 0, 00:26:05.682 "data_size": 0 00:26:05.682 }, 00:26:05.682 { 00:26:05.682 "name": "BaseBdev4", 00:26:05.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.683 "is_configured": false, 00:26:05.683 "data_offset": 0, 00:26:05.683 "data_size": 0 00:26:05.683 } 00:26:05.683 ] 00:26:05.683 }' 00:26:05.683 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:05.683 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.941 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:05.941 12:57:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.941 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.941 [2024-12-05 12:57:48.300320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:05.941 [2024-12-05 12:57:48.300507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:26:05.941 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.941 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:05.941 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.941 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.941 [2024-12-05 12:57:48.308361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:05.941 [2024-12-05 12:57:48.309984] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:05.941 [2024-12-05 12:57:48.310094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:05.941 [2024-12-05 12:57:48.310147] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:05.941 [2024-12-05 12:57:48.310171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:05.941 [2024-12-05 12:57:48.310213] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:05.941 [2024-12-05 12:57:48.310234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:05.941 12:57:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.941 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:05.941 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:05.941 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:05.941 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:05.941 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:05.941 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:05.941 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:05.942 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:05.942 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:05.942 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:05.942 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:05.942 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:05.942 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:05.942 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:05.942 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.942 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.942 12:57:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.942 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:05.942 "name": "Existed_Raid", 00:26:05.942 "uuid": "b31e4686-f383-4077-a392-e0a1a232fe27", 00:26:05.942 "strip_size_kb": 64, 00:26:05.942 "state": "configuring", 00:26:05.942 "raid_level": "raid5f", 00:26:05.942 "superblock": true, 00:26:05.942 "num_base_bdevs": 4, 00:26:05.942 "num_base_bdevs_discovered": 1, 00:26:05.942 "num_base_bdevs_operational": 4, 00:26:05.942 "base_bdevs_list": [ 00:26:05.942 { 00:26:05.942 "name": "BaseBdev1", 00:26:05.942 "uuid": "39971066-e83f-4c9d-be46-7ab42eb69d1a", 00:26:05.942 "is_configured": true, 00:26:05.942 "data_offset": 2048, 00:26:05.942 "data_size": 63488 00:26:05.942 }, 00:26:05.942 { 00:26:05.942 "name": "BaseBdev2", 00:26:05.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.942 "is_configured": false, 00:26:05.942 "data_offset": 0, 00:26:05.942 "data_size": 0 00:26:05.942 }, 00:26:05.942 { 00:26:05.942 "name": "BaseBdev3", 00:26:05.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.942 "is_configured": false, 00:26:05.942 "data_offset": 0, 00:26:05.942 "data_size": 0 00:26:05.942 }, 00:26:05.942 { 00:26:05.942 "name": "BaseBdev4", 00:26:05.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.942 "is_configured": false, 00:26:05.942 "data_offset": 0, 00:26:05.942 "data_size": 0 00:26:05.942 } 00:26:05.942 ] 00:26:05.942 }' 00:26:05.942 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:05.942 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.236 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:06.236 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:06.236 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.236 [2024-12-05 12:57:48.670802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:06.236 BaseBdev2 00:26:06.236 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.236 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:06.236 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:26:06.236 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:06.236 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.237 [ 00:26:06.237 { 00:26:06.237 "name": "BaseBdev2", 00:26:06.237 "aliases": [ 00:26:06.237 
"f4420b35-cc55-4fd8-ac75-9aabcbadc860" 00:26:06.237 ], 00:26:06.237 "product_name": "Malloc disk", 00:26:06.237 "block_size": 512, 00:26:06.237 "num_blocks": 65536, 00:26:06.237 "uuid": "f4420b35-cc55-4fd8-ac75-9aabcbadc860", 00:26:06.237 "assigned_rate_limits": { 00:26:06.237 "rw_ios_per_sec": 0, 00:26:06.237 "rw_mbytes_per_sec": 0, 00:26:06.237 "r_mbytes_per_sec": 0, 00:26:06.237 "w_mbytes_per_sec": 0 00:26:06.237 }, 00:26:06.237 "claimed": true, 00:26:06.237 "claim_type": "exclusive_write", 00:26:06.237 "zoned": false, 00:26:06.237 "supported_io_types": { 00:26:06.237 "read": true, 00:26:06.237 "write": true, 00:26:06.237 "unmap": true, 00:26:06.237 "flush": true, 00:26:06.237 "reset": true, 00:26:06.237 "nvme_admin": false, 00:26:06.237 "nvme_io": false, 00:26:06.237 "nvme_io_md": false, 00:26:06.237 "write_zeroes": true, 00:26:06.237 "zcopy": true, 00:26:06.237 "get_zone_info": false, 00:26:06.237 "zone_management": false, 00:26:06.237 "zone_append": false, 00:26:06.237 "compare": false, 00:26:06.237 "compare_and_write": false, 00:26:06.237 "abort": true, 00:26:06.237 "seek_hole": false, 00:26:06.237 "seek_data": false, 00:26:06.237 "copy": true, 00:26:06.237 "nvme_iov_md": false 00:26:06.237 }, 00:26:06.237 "memory_domains": [ 00:26:06.237 { 00:26:06.237 "dma_device_id": "system", 00:26:06.237 "dma_device_type": 1 00:26:06.237 }, 00:26:06.237 { 00:26:06.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:06.237 "dma_device_type": 2 00:26:06.237 } 00:26:06.237 ], 00:26:06.237 "driver_specific": {} 00:26:06.237 } 00:26:06.237 ] 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:06.237 "name": "Existed_Raid", 00:26:06.237 "uuid": 
"b31e4686-f383-4077-a392-e0a1a232fe27", 00:26:06.237 "strip_size_kb": 64, 00:26:06.237 "state": "configuring", 00:26:06.237 "raid_level": "raid5f", 00:26:06.237 "superblock": true, 00:26:06.237 "num_base_bdevs": 4, 00:26:06.237 "num_base_bdevs_discovered": 2, 00:26:06.237 "num_base_bdevs_operational": 4, 00:26:06.237 "base_bdevs_list": [ 00:26:06.237 { 00:26:06.237 "name": "BaseBdev1", 00:26:06.237 "uuid": "39971066-e83f-4c9d-be46-7ab42eb69d1a", 00:26:06.237 "is_configured": true, 00:26:06.237 "data_offset": 2048, 00:26:06.237 "data_size": 63488 00:26:06.237 }, 00:26:06.237 { 00:26:06.237 "name": "BaseBdev2", 00:26:06.237 "uuid": "f4420b35-cc55-4fd8-ac75-9aabcbadc860", 00:26:06.237 "is_configured": true, 00:26:06.237 "data_offset": 2048, 00:26:06.237 "data_size": 63488 00:26:06.237 }, 00:26:06.237 { 00:26:06.237 "name": "BaseBdev3", 00:26:06.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.237 "is_configured": false, 00:26:06.237 "data_offset": 0, 00:26:06.237 "data_size": 0 00:26:06.237 }, 00:26:06.237 { 00:26:06.237 "name": "BaseBdev4", 00:26:06.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.237 "is_configured": false, 00:26:06.237 "data_offset": 0, 00:26:06.237 "data_size": 0 00:26:06.237 } 00:26:06.237 ] 00:26:06.237 }' 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:06.237 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.496 12:57:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:06.496 12:57:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.496 [2024-12-05 12:57:49.044843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:06.496 BaseBdev3 
00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.496 [ 00:26:06.496 { 00:26:06.496 "name": "BaseBdev3", 00:26:06.496 "aliases": [ 00:26:06.496 "3a48ba20-0c61-43a1-ac4a-de23bb10fdc2" 00:26:06.496 ], 00:26:06.496 "product_name": "Malloc disk", 00:26:06.496 "block_size": 512, 00:26:06.496 "num_blocks": 65536, 00:26:06.496 "uuid": "3a48ba20-0c61-43a1-ac4a-de23bb10fdc2", 00:26:06.496 
"assigned_rate_limits": { 00:26:06.496 "rw_ios_per_sec": 0, 00:26:06.496 "rw_mbytes_per_sec": 0, 00:26:06.496 "r_mbytes_per_sec": 0, 00:26:06.496 "w_mbytes_per_sec": 0 00:26:06.496 }, 00:26:06.496 "claimed": true, 00:26:06.496 "claim_type": "exclusive_write", 00:26:06.496 "zoned": false, 00:26:06.496 "supported_io_types": { 00:26:06.496 "read": true, 00:26:06.496 "write": true, 00:26:06.496 "unmap": true, 00:26:06.496 "flush": true, 00:26:06.496 "reset": true, 00:26:06.496 "nvme_admin": false, 00:26:06.496 "nvme_io": false, 00:26:06.496 "nvme_io_md": false, 00:26:06.496 "write_zeroes": true, 00:26:06.496 "zcopy": true, 00:26:06.496 "get_zone_info": false, 00:26:06.496 "zone_management": false, 00:26:06.496 "zone_append": false, 00:26:06.496 "compare": false, 00:26:06.496 "compare_and_write": false, 00:26:06.496 "abort": true, 00:26:06.496 "seek_hole": false, 00:26:06.496 "seek_data": false, 00:26:06.496 "copy": true, 00:26:06.496 "nvme_iov_md": false 00:26:06.496 }, 00:26:06.496 "memory_domains": [ 00:26:06.496 { 00:26:06.496 "dma_device_id": "system", 00:26:06.496 "dma_device_type": 1 00:26:06.496 }, 00:26:06.496 { 00:26:06.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:06.496 "dma_device_type": 2 00:26:06.496 } 00:26:06.496 ], 00:26:06.496 "driver_specific": {} 00:26:06.496 } 00:26:06.496 ] 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:06.496 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:06.497 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:06.497 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:06.497 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:06.497 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:06.497 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:06.497 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:06.497 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.497 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.755 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.755 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:06.755 "name": "Existed_Raid", 00:26:06.755 "uuid": "b31e4686-f383-4077-a392-e0a1a232fe27", 00:26:06.755 "strip_size_kb": 64, 00:26:06.755 "state": "configuring", 00:26:06.755 "raid_level": "raid5f", 00:26:06.755 "superblock": true, 00:26:06.755 "num_base_bdevs": 4, 00:26:06.755 "num_base_bdevs_discovered": 3, 
00:26:06.755 "num_base_bdevs_operational": 4, 00:26:06.755 "base_bdevs_list": [ 00:26:06.755 { 00:26:06.755 "name": "BaseBdev1", 00:26:06.755 "uuid": "39971066-e83f-4c9d-be46-7ab42eb69d1a", 00:26:06.755 "is_configured": true, 00:26:06.755 "data_offset": 2048, 00:26:06.755 "data_size": 63488 00:26:06.755 }, 00:26:06.755 { 00:26:06.755 "name": "BaseBdev2", 00:26:06.755 "uuid": "f4420b35-cc55-4fd8-ac75-9aabcbadc860", 00:26:06.755 "is_configured": true, 00:26:06.755 "data_offset": 2048, 00:26:06.755 "data_size": 63488 00:26:06.755 }, 00:26:06.755 { 00:26:06.755 "name": "BaseBdev3", 00:26:06.755 "uuid": "3a48ba20-0c61-43a1-ac4a-de23bb10fdc2", 00:26:06.755 "is_configured": true, 00:26:06.755 "data_offset": 2048, 00:26:06.755 "data_size": 63488 00:26:06.755 }, 00:26:06.755 { 00:26:06.755 "name": "BaseBdev4", 00:26:06.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.755 "is_configured": false, 00:26:06.755 "data_offset": 0, 00:26:06.755 "data_size": 0 00:26:06.755 } 00:26:06.755 ] 00:26:06.755 }' 00:26:06.755 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:06.755 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.014 [2024-12-05 12:57:49.407921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:07.014 [2024-12-05 12:57:49.408317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:07.014 [2024-12-05 12:57:49.408337] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:07.014 [2024-12-05 
12:57:49.408642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:07.014 BaseBdev4 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.014 [2024-12-05 12:57:49.413631] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:07.014 [2024-12-05 12:57:49.413652] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:07.014 [2024-12-05 12:57:49.413882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:07.014 12:57:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.014 [ 00:26:07.014 { 00:26:07.014 "name": "BaseBdev4", 00:26:07.014 "aliases": [ 00:26:07.014 "a793caf2-7098-46c8-9ca5-5cb5a29bfc4d" 00:26:07.014 ], 00:26:07.014 "product_name": "Malloc disk", 00:26:07.014 "block_size": 512, 00:26:07.014 "num_blocks": 65536, 00:26:07.014 "uuid": "a793caf2-7098-46c8-9ca5-5cb5a29bfc4d", 00:26:07.014 "assigned_rate_limits": { 00:26:07.014 "rw_ios_per_sec": 0, 00:26:07.014 "rw_mbytes_per_sec": 0, 00:26:07.014 "r_mbytes_per_sec": 0, 00:26:07.014 "w_mbytes_per_sec": 0 00:26:07.014 }, 00:26:07.014 "claimed": true, 00:26:07.014 "claim_type": "exclusive_write", 00:26:07.014 "zoned": false, 00:26:07.014 "supported_io_types": { 00:26:07.014 "read": true, 00:26:07.014 "write": true, 00:26:07.014 "unmap": true, 00:26:07.014 "flush": true, 00:26:07.014 "reset": true, 00:26:07.014 "nvme_admin": false, 00:26:07.014 "nvme_io": false, 00:26:07.014 "nvme_io_md": false, 00:26:07.014 "write_zeroes": true, 00:26:07.014 "zcopy": true, 00:26:07.014 "get_zone_info": false, 00:26:07.014 "zone_management": false, 00:26:07.014 "zone_append": false, 00:26:07.014 "compare": false, 00:26:07.014 "compare_and_write": false, 00:26:07.014 "abort": true, 00:26:07.014 "seek_hole": false, 00:26:07.014 "seek_data": false, 00:26:07.014 "copy": true, 00:26:07.014 "nvme_iov_md": false 00:26:07.014 }, 00:26:07.014 "memory_domains": [ 00:26:07.014 { 00:26:07.014 "dma_device_id": "system", 00:26:07.014 "dma_device_type": 1 00:26:07.014 }, 00:26:07.014 { 00:26:07.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.014 "dma_device_type": 2 00:26:07.014 } 00:26:07.014 ], 00:26:07.014 "driver_specific": {} 00:26:07.014 } 00:26:07.014 ] 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.014 12:57:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:07.014 "name": "Existed_Raid", 00:26:07.014 "uuid": "b31e4686-f383-4077-a392-e0a1a232fe27", 00:26:07.014 "strip_size_kb": 64, 00:26:07.014 "state": "online", 00:26:07.014 "raid_level": "raid5f", 00:26:07.014 "superblock": true, 00:26:07.014 "num_base_bdevs": 4, 00:26:07.014 "num_base_bdevs_discovered": 4, 00:26:07.014 "num_base_bdevs_operational": 4, 00:26:07.014 "base_bdevs_list": [ 00:26:07.014 { 00:26:07.014 "name": "BaseBdev1", 00:26:07.014 "uuid": "39971066-e83f-4c9d-be46-7ab42eb69d1a", 00:26:07.014 "is_configured": true, 00:26:07.014 "data_offset": 2048, 00:26:07.014 "data_size": 63488 00:26:07.014 }, 00:26:07.014 { 00:26:07.014 "name": "BaseBdev2", 00:26:07.014 "uuid": "f4420b35-cc55-4fd8-ac75-9aabcbadc860", 00:26:07.014 "is_configured": true, 00:26:07.014 "data_offset": 2048, 00:26:07.014 "data_size": 63488 00:26:07.014 }, 00:26:07.014 { 00:26:07.014 "name": "BaseBdev3", 00:26:07.014 "uuid": "3a48ba20-0c61-43a1-ac4a-de23bb10fdc2", 00:26:07.014 "is_configured": true, 00:26:07.014 "data_offset": 2048, 00:26:07.014 "data_size": 63488 00:26:07.014 }, 00:26:07.014 { 00:26:07.014 "name": "BaseBdev4", 00:26:07.014 "uuid": "a793caf2-7098-46c8-9ca5-5cb5a29bfc4d", 00:26:07.014 "is_configured": true, 00:26:07.014 "data_offset": 2048, 00:26:07.014 "data_size": 63488 00:26:07.014 } 00:26:07.014 ] 00:26:07.014 }' 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:07.014 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.273 [2024-12-05 12:57:49.755529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:07.273 "name": "Existed_Raid", 00:26:07.273 "aliases": [ 00:26:07.273 "b31e4686-f383-4077-a392-e0a1a232fe27" 00:26:07.273 ], 00:26:07.273 "product_name": "Raid Volume", 00:26:07.273 "block_size": 512, 00:26:07.273 "num_blocks": 190464, 00:26:07.273 "uuid": "b31e4686-f383-4077-a392-e0a1a232fe27", 00:26:07.273 "assigned_rate_limits": { 00:26:07.273 "rw_ios_per_sec": 0, 00:26:07.273 "rw_mbytes_per_sec": 0, 00:26:07.273 "r_mbytes_per_sec": 0, 00:26:07.273 "w_mbytes_per_sec": 0 00:26:07.273 }, 00:26:07.273 "claimed": false, 00:26:07.273 "zoned": false, 00:26:07.273 "supported_io_types": { 00:26:07.273 "read": true, 00:26:07.273 "write": true, 00:26:07.273 "unmap": false, 00:26:07.273 "flush": false, 
00:26:07.273 "reset": true, 00:26:07.273 "nvme_admin": false, 00:26:07.273 "nvme_io": false, 00:26:07.273 "nvme_io_md": false, 00:26:07.273 "write_zeroes": true, 00:26:07.273 "zcopy": false, 00:26:07.273 "get_zone_info": false, 00:26:07.273 "zone_management": false, 00:26:07.273 "zone_append": false, 00:26:07.273 "compare": false, 00:26:07.273 "compare_and_write": false, 00:26:07.273 "abort": false, 00:26:07.273 "seek_hole": false, 00:26:07.273 "seek_data": false, 00:26:07.273 "copy": false, 00:26:07.273 "nvme_iov_md": false 00:26:07.273 }, 00:26:07.273 "driver_specific": { 00:26:07.273 "raid": { 00:26:07.273 "uuid": "b31e4686-f383-4077-a392-e0a1a232fe27", 00:26:07.273 "strip_size_kb": 64, 00:26:07.273 "state": "online", 00:26:07.273 "raid_level": "raid5f", 00:26:07.273 "superblock": true, 00:26:07.273 "num_base_bdevs": 4, 00:26:07.273 "num_base_bdevs_discovered": 4, 00:26:07.273 "num_base_bdevs_operational": 4, 00:26:07.273 "base_bdevs_list": [ 00:26:07.273 { 00:26:07.273 "name": "BaseBdev1", 00:26:07.273 "uuid": "39971066-e83f-4c9d-be46-7ab42eb69d1a", 00:26:07.273 "is_configured": true, 00:26:07.273 "data_offset": 2048, 00:26:07.273 "data_size": 63488 00:26:07.273 }, 00:26:07.273 { 00:26:07.273 "name": "BaseBdev2", 00:26:07.273 "uuid": "f4420b35-cc55-4fd8-ac75-9aabcbadc860", 00:26:07.273 "is_configured": true, 00:26:07.273 "data_offset": 2048, 00:26:07.273 "data_size": 63488 00:26:07.273 }, 00:26:07.273 { 00:26:07.273 "name": "BaseBdev3", 00:26:07.273 "uuid": "3a48ba20-0c61-43a1-ac4a-de23bb10fdc2", 00:26:07.273 "is_configured": true, 00:26:07.273 "data_offset": 2048, 00:26:07.273 "data_size": 63488 00:26:07.273 }, 00:26:07.273 { 00:26:07.273 "name": "BaseBdev4", 00:26:07.273 "uuid": "a793caf2-7098-46c8-9ca5-5cb5a29bfc4d", 00:26:07.273 "is_configured": true, 00:26:07.273 "data_offset": 2048, 00:26:07.273 "data_size": 63488 00:26:07.273 } 00:26:07.273 ] 00:26:07.273 } 00:26:07.273 } 00:26:07.273 }' 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:07.273 BaseBdev2 00:26:07.273 BaseBdev3 00:26:07.273 BaseBdev4' 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:07.273 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.531 12:57:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.531 [2024-12-05 12:57:49.979368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:07.531 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.531 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:07.531 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:26:07.531 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:07.531 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:26:07.531 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:26:07.531 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:26:07.531 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:07.531 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:26:07.531 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:07.531 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:07.531 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:07.531 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:07.531 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:07.531 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:07.531 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:07.531 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.531 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.531 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.531 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:07.531 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.532 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:07.532 "name": "Existed_Raid", 00:26:07.532 "uuid": "b31e4686-f383-4077-a392-e0a1a232fe27", 00:26:07.532 "strip_size_kb": 64, 00:26:07.532 "state": "online", 00:26:07.532 "raid_level": "raid5f", 00:26:07.532 "superblock": true, 00:26:07.532 "num_base_bdevs": 4, 00:26:07.532 "num_base_bdevs_discovered": 3, 00:26:07.532 "num_base_bdevs_operational": 3, 00:26:07.532 "base_bdevs_list": [ 00:26:07.532 { 00:26:07.532 "name": null, 00:26:07.532 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:07.532 "is_configured": false, 00:26:07.532 "data_offset": 0, 00:26:07.532 "data_size": 63488 00:26:07.532 }, 00:26:07.532 { 00:26:07.532 "name": "BaseBdev2", 00:26:07.532 "uuid": "f4420b35-cc55-4fd8-ac75-9aabcbadc860", 00:26:07.532 "is_configured": true, 00:26:07.532 "data_offset": 2048, 00:26:07.532 "data_size": 63488 00:26:07.532 }, 00:26:07.532 { 00:26:07.532 "name": "BaseBdev3", 00:26:07.532 "uuid": "3a48ba20-0c61-43a1-ac4a-de23bb10fdc2", 00:26:07.532 "is_configured": true, 00:26:07.532 "data_offset": 2048, 00:26:07.532 "data_size": 63488 00:26:07.532 }, 00:26:07.532 { 00:26:07.532 "name": "BaseBdev4", 00:26:07.532 "uuid": "a793caf2-7098-46c8-9ca5-5cb5a29bfc4d", 00:26:07.532 "is_configured": true, 00:26:07.532 "data_offset": 2048, 00:26:07.532 "data_size": 63488 00:26:07.532 } 00:26:07.532 ] 00:26:07.532 }' 00:26:07.532 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:07.532 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.789 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:07.789 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:07.789 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.789 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:07.789 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.789 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.047 [2024-12-05 12:57:50.410034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:08.047 [2024-12-05 12:57:50.410186] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:08.047 [2024-12-05 12:57:50.469565] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:08.047 
12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.047 [2024-12-05 12:57:50.509602] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.047 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.047 [2024-12-05 12:57:50.608775] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:08.047 [2024-12-05 12:57:50.608925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:26:08.306 BaseBdev2 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:08.306 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.307 [ 00:26:08.307 { 00:26:08.307 "name": "BaseBdev2", 00:26:08.307 "aliases": [ 00:26:08.307 "a1a3aae5-921d-4415-be68-8c1d8f5675fc" 00:26:08.307 ], 00:26:08.307 "product_name": "Malloc disk", 00:26:08.307 "block_size": 512, 00:26:08.307 "num_blocks": 65536, 00:26:08.307 "uuid": 
"a1a3aae5-921d-4415-be68-8c1d8f5675fc", 00:26:08.307 "assigned_rate_limits": { 00:26:08.307 "rw_ios_per_sec": 0, 00:26:08.307 "rw_mbytes_per_sec": 0, 00:26:08.307 "r_mbytes_per_sec": 0, 00:26:08.307 "w_mbytes_per_sec": 0 00:26:08.307 }, 00:26:08.307 "claimed": false, 00:26:08.307 "zoned": false, 00:26:08.307 "supported_io_types": { 00:26:08.307 "read": true, 00:26:08.307 "write": true, 00:26:08.307 "unmap": true, 00:26:08.307 "flush": true, 00:26:08.307 "reset": true, 00:26:08.307 "nvme_admin": false, 00:26:08.307 "nvme_io": false, 00:26:08.307 "nvme_io_md": false, 00:26:08.307 "write_zeroes": true, 00:26:08.307 "zcopy": true, 00:26:08.307 "get_zone_info": false, 00:26:08.307 "zone_management": false, 00:26:08.307 "zone_append": false, 00:26:08.307 "compare": false, 00:26:08.307 "compare_and_write": false, 00:26:08.307 "abort": true, 00:26:08.307 "seek_hole": false, 00:26:08.307 "seek_data": false, 00:26:08.307 "copy": true, 00:26:08.307 "nvme_iov_md": false 00:26:08.307 }, 00:26:08.307 "memory_domains": [ 00:26:08.307 { 00:26:08.307 "dma_device_id": "system", 00:26:08.307 "dma_device_type": 1 00:26:08.307 }, 00:26:08.307 { 00:26:08.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.307 "dma_device_type": 2 00:26:08.307 } 00:26:08.307 ], 00:26:08.307 "driver_specific": {} 00:26:08.307 } 00:26:08.307 ] 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.307 BaseBdev3 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.307 [ 00:26:08.307 { 00:26:08.307 "name": "BaseBdev3", 00:26:08.307 "aliases": [ 00:26:08.307 "c4e61c38-60af-4bca-ab1e-0d729a18ed03" 00:26:08.307 ], 00:26:08.307 
"product_name": "Malloc disk", 00:26:08.307 "block_size": 512, 00:26:08.307 "num_blocks": 65536, 00:26:08.307 "uuid": "c4e61c38-60af-4bca-ab1e-0d729a18ed03", 00:26:08.307 "assigned_rate_limits": { 00:26:08.307 "rw_ios_per_sec": 0, 00:26:08.307 "rw_mbytes_per_sec": 0, 00:26:08.307 "r_mbytes_per_sec": 0, 00:26:08.307 "w_mbytes_per_sec": 0 00:26:08.307 }, 00:26:08.307 "claimed": false, 00:26:08.307 "zoned": false, 00:26:08.307 "supported_io_types": { 00:26:08.307 "read": true, 00:26:08.307 "write": true, 00:26:08.307 "unmap": true, 00:26:08.307 "flush": true, 00:26:08.307 "reset": true, 00:26:08.307 "nvme_admin": false, 00:26:08.307 "nvme_io": false, 00:26:08.307 "nvme_io_md": false, 00:26:08.307 "write_zeroes": true, 00:26:08.307 "zcopy": true, 00:26:08.307 "get_zone_info": false, 00:26:08.307 "zone_management": false, 00:26:08.307 "zone_append": false, 00:26:08.307 "compare": false, 00:26:08.307 "compare_and_write": false, 00:26:08.307 "abort": true, 00:26:08.307 "seek_hole": false, 00:26:08.307 "seek_data": false, 00:26:08.307 "copy": true, 00:26:08.307 "nvme_iov_md": false 00:26:08.307 }, 00:26:08.307 "memory_domains": [ 00:26:08.307 { 00:26:08.307 "dma_device_id": "system", 00:26:08.307 "dma_device_type": 1 00:26:08.307 }, 00:26:08.307 { 00:26:08.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.307 "dma_device_type": 2 00:26:08.307 } 00:26:08.307 ], 00:26:08.307 "driver_specific": {} 00:26:08.307 } 00:26:08.307 ] 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.307 BaseBdev4 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.307 [ 00:26:08.307 { 00:26:08.307 "name": "BaseBdev4", 00:26:08.307 
"aliases": [ 00:26:08.307 "878da251-2f1b-49ec-9bf6-18bb465e01c3" 00:26:08.307 ], 00:26:08.307 "product_name": "Malloc disk", 00:26:08.307 "block_size": 512, 00:26:08.307 "num_blocks": 65536, 00:26:08.307 "uuid": "878da251-2f1b-49ec-9bf6-18bb465e01c3", 00:26:08.307 "assigned_rate_limits": { 00:26:08.307 "rw_ios_per_sec": 0, 00:26:08.307 "rw_mbytes_per_sec": 0, 00:26:08.307 "r_mbytes_per_sec": 0, 00:26:08.307 "w_mbytes_per_sec": 0 00:26:08.307 }, 00:26:08.307 "claimed": false, 00:26:08.307 "zoned": false, 00:26:08.307 "supported_io_types": { 00:26:08.307 "read": true, 00:26:08.307 "write": true, 00:26:08.307 "unmap": true, 00:26:08.307 "flush": true, 00:26:08.307 "reset": true, 00:26:08.307 "nvme_admin": false, 00:26:08.307 "nvme_io": false, 00:26:08.307 "nvme_io_md": false, 00:26:08.307 "write_zeroes": true, 00:26:08.307 "zcopy": true, 00:26:08.307 "get_zone_info": false, 00:26:08.307 "zone_management": false, 00:26:08.307 "zone_append": false, 00:26:08.307 "compare": false, 00:26:08.307 "compare_and_write": false, 00:26:08.307 "abort": true, 00:26:08.307 "seek_hole": false, 00:26:08.307 "seek_data": false, 00:26:08.307 "copy": true, 00:26:08.307 "nvme_iov_md": false 00:26:08.307 }, 00:26:08.307 "memory_domains": [ 00:26:08.307 { 00:26:08.307 "dma_device_id": "system", 00:26:08.307 "dma_device_type": 1 00:26:08.307 }, 00:26:08.307 { 00:26:08.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.307 "dma_device_type": 2 00:26:08.307 } 00:26:08.307 ], 00:26:08.307 "driver_specific": {} 00:26:08.307 } 00:26:08.307 ] 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:08.307 
12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:08.307 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.308 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.308 [2024-12-05 12:57:50.880971] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:08.308 [2024-12-05 12:57:50.881110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:08.308 [2024-12-05 12:57:50.881182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:08.308 [2024-12-05 12:57:50.883067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:08.308 [2024-12-05 12:57:50.883200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:08.308 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.308 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:08.308 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:08.308 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:08.308 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:08.308 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:08.308 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:08.308 12:57:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:08.308 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:08.308 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:08.308 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:08.308 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:08.308 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:08.566 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.566 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.566 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.566 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:08.566 "name": "Existed_Raid", 00:26:08.566 "uuid": "dbefe754-68af-4ea7-a2df-515139239999", 00:26:08.566 "strip_size_kb": 64, 00:26:08.566 "state": "configuring", 00:26:08.566 "raid_level": "raid5f", 00:26:08.566 "superblock": true, 00:26:08.566 "num_base_bdevs": 4, 00:26:08.566 "num_base_bdevs_discovered": 3, 00:26:08.566 "num_base_bdevs_operational": 4, 00:26:08.566 "base_bdevs_list": [ 00:26:08.566 { 00:26:08.566 "name": "BaseBdev1", 00:26:08.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.566 "is_configured": false, 00:26:08.566 "data_offset": 0, 00:26:08.566 "data_size": 0 00:26:08.566 }, 00:26:08.566 { 00:26:08.566 "name": "BaseBdev2", 00:26:08.566 "uuid": "a1a3aae5-921d-4415-be68-8c1d8f5675fc", 00:26:08.566 "is_configured": true, 00:26:08.566 "data_offset": 2048, 00:26:08.566 "data_size": 63488 00:26:08.566 }, 00:26:08.566 { 00:26:08.566 "name": "BaseBdev3", 
00:26:08.566 "uuid": "c4e61c38-60af-4bca-ab1e-0d729a18ed03", 00:26:08.566 "is_configured": true, 00:26:08.566 "data_offset": 2048, 00:26:08.566 "data_size": 63488 00:26:08.566 }, 00:26:08.566 { 00:26:08.566 "name": "BaseBdev4", 00:26:08.566 "uuid": "878da251-2f1b-49ec-9bf6-18bb465e01c3", 00:26:08.566 "is_configured": true, 00:26:08.566 "data_offset": 2048, 00:26:08.566 "data_size": 63488 00:26:08.566 } 00:26:08.566 ] 00:26:08.566 }' 00:26:08.566 12:57:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:08.566 12:57:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.824 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:26:08.824 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.824 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.824 [2024-12-05 12:57:51.201061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:08.824 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.824 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:08.824 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:08.824 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:08.824 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:08.824 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:08.824 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:08.824 
12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:08.824 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:08.824 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:08.824 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:08.824 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:08.824 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:08.824 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.824 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.824 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.824 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:08.824 "name": "Existed_Raid", 00:26:08.824 "uuid": "dbefe754-68af-4ea7-a2df-515139239999", 00:26:08.824 "strip_size_kb": 64, 00:26:08.825 "state": "configuring", 00:26:08.825 "raid_level": "raid5f", 00:26:08.825 "superblock": true, 00:26:08.825 "num_base_bdevs": 4, 00:26:08.825 "num_base_bdevs_discovered": 2, 00:26:08.825 "num_base_bdevs_operational": 4, 00:26:08.825 "base_bdevs_list": [ 00:26:08.825 { 00:26:08.825 "name": "BaseBdev1", 00:26:08.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.825 "is_configured": false, 00:26:08.825 "data_offset": 0, 00:26:08.825 "data_size": 0 00:26:08.825 }, 00:26:08.825 { 00:26:08.825 "name": null, 00:26:08.825 "uuid": "a1a3aae5-921d-4415-be68-8c1d8f5675fc", 00:26:08.825 "is_configured": false, 00:26:08.825 "data_offset": 0, 00:26:08.825 "data_size": 63488 00:26:08.825 }, 00:26:08.825 { 
00:26:08.825 "name": "BaseBdev3", 00:26:08.825 "uuid": "c4e61c38-60af-4bca-ab1e-0d729a18ed03", 00:26:08.825 "is_configured": true, 00:26:08.825 "data_offset": 2048, 00:26:08.825 "data_size": 63488 00:26:08.825 }, 00:26:08.825 { 00:26:08.825 "name": "BaseBdev4", 00:26:08.825 "uuid": "878da251-2f1b-49ec-9bf6-18bb465e01c3", 00:26:08.825 "is_configured": true, 00:26:08.825 "data_offset": 2048, 00:26:08.825 "data_size": 63488 00:26:08.825 } 00:26:08.825 ] 00:26:08.825 }' 00:26:08.825 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:08.825 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.083 [2024-12-05 12:57:51.584153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:09.083 BaseBdev1 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.083 [ 00:26:09.083 { 00:26:09.083 "name": "BaseBdev1", 00:26:09.083 "aliases": [ 00:26:09.083 "435856e5-e444-4f14-abc6-389f2386e7d9" 00:26:09.083 ], 00:26:09.083 "product_name": "Malloc disk", 00:26:09.083 "block_size": 512, 00:26:09.083 "num_blocks": 65536, 00:26:09.083 "uuid": "435856e5-e444-4f14-abc6-389f2386e7d9", 00:26:09.083 "assigned_rate_limits": { 00:26:09.083 "rw_ios_per_sec": 0, 00:26:09.083 "rw_mbytes_per_sec": 0, 00:26:09.083 
"r_mbytes_per_sec": 0, 00:26:09.083 "w_mbytes_per_sec": 0 00:26:09.083 }, 00:26:09.083 "claimed": true, 00:26:09.083 "claim_type": "exclusive_write", 00:26:09.083 "zoned": false, 00:26:09.083 "supported_io_types": { 00:26:09.083 "read": true, 00:26:09.083 "write": true, 00:26:09.083 "unmap": true, 00:26:09.083 "flush": true, 00:26:09.083 "reset": true, 00:26:09.083 "nvme_admin": false, 00:26:09.083 "nvme_io": false, 00:26:09.083 "nvme_io_md": false, 00:26:09.083 "write_zeroes": true, 00:26:09.083 "zcopy": true, 00:26:09.083 "get_zone_info": false, 00:26:09.083 "zone_management": false, 00:26:09.083 "zone_append": false, 00:26:09.083 "compare": false, 00:26:09.083 "compare_and_write": false, 00:26:09.083 "abort": true, 00:26:09.083 "seek_hole": false, 00:26:09.083 "seek_data": false, 00:26:09.083 "copy": true, 00:26:09.083 "nvme_iov_md": false 00:26:09.083 }, 00:26:09.083 "memory_domains": [ 00:26:09.083 { 00:26:09.083 "dma_device_id": "system", 00:26:09.083 "dma_device_type": 1 00:26:09.083 }, 00:26:09.083 { 00:26:09.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.083 "dma_device_type": 2 00:26:09.083 } 00:26:09.083 ], 00:26:09.083 "driver_specific": {} 00:26:09.083 } 00:26:09.083 ] 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:09.083 12:57:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:09.083 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:09.084 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:09.084 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:09.084 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:09.084 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:09.084 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:09.084 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.084 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.084 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:09.084 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.084 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:09.084 "name": "Existed_Raid", 00:26:09.084 "uuid": "dbefe754-68af-4ea7-a2df-515139239999", 00:26:09.084 "strip_size_kb": 64, 00:26:09.084 "state": "configuring", 00:26:09.084 "raid_level": "raid5f", 00:26:09.084 "superblock": true, 00:26:09.084 "num_base_bdevs": 4, 00:26:09.084 "num_base_bdevs_discovered": 3, 00:26:09.084 "num_base_bdevs_operational": 4, 00:26:09.084 "base_bdevs_list": [ 00:26:09.084 { 00:26:09.084 "name": "BaseBdev1", 00:26:09.084 "uuid": "435856e5-e444-4f14-abc6-389f2386e7d9", 00:26:09.084 "is_configured": true, 00:26:09.084 "data_offset": 2048, 00:26:09.084 "data_size": 63488 00:26:09.084 
}, 00:26:09.084 { 00:26:09.084 "name": null, 00:26:09.084 "uuid": "a1a3aae5-921d-4415-be68-8c1d8f5675fc", 00:26:09.084 "is_configured": false, 00:26:09.084 "data_offset": 0, 00:26:09.084 "data_size": 63488 00:26:09.084 }, 00:26:09.084 { 00:26:09.084 "name": "BaseBdev3", 00:26:09.084 "uuid": "c4e61c38-60af-4bca-ab1e-0d729a18ed03", 00:26:09.084 "is_configured": true, 00:26:09.084 "data_offset": 2048, 00:26:09.084 "data_size": 63488 00:26:09.084 }, 00:26:09.084 { 00:26:09.084 "name": "BaseBdev4", 00:26:09.084 "uuid": "878da251-2f1b-49ec-9bf6-18bb465e01c3", 00:26:09.084 "is_configured": true, 00:26:09.084 "data_offset": 2048, 00:26:09.084 "data_size": 63488 00:26:09.084 } 00:26:09.084 ] 00:26:09.084 }' 00:26:09.084 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:09.084 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.341 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:09.342 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:09.342 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.342 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.601 
[2024-12-05 12:57:51.948314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:09.601 "name": "Existed_Raid", 00:26:09.601 "uuid": "dbefe754-68af-4ea7-a2df-515139239999", 00:26:09.601 "strip_size_kb": 64, 00:26:09.601 "state": "configuring", 00:26:09.601 "raid_level": "raid5f", 00:26:09.601 "superblock": true, 00:26:09.601 "num_base_bdevs": 4, 00:26:09.601 "num_base_bdevs_discovered": 2, 00:26:09.601 "num_base_bdevs_operational": 4, 00:26:09.601 "base_bdevs_list": [ 00:26:09.601 { 00:26:09.601 "name": "BaseBdev1", 00:26:09.601 "uuid": "435856e5-e444-4f14-abc6-389f2386e7d9", 00:26:09.601 "is_configured": true, 00:26:09.601 "data_offset": 2048, 00:26:09.601 "data_size": 63488 00:26:09.601 }, 00:26:09.601 { 00:26:09.601 "name": null, 00:26:09.601 "uuid": "a1a3aae5-921d-4415-be68-8c1d8f5675fc", 00:26:09.601 "is_configured": false, 00:26:09.601 "data_offset": 0, 00:26:09.601 "data_size": 63488 00:26:09.601 }, 00:26:09.601 { 00:26:09.601 "name": null, 00:26:09.601 "uuid": "c4e61c38-60af-4bca-ab1e-0d729a18ed03", 00:26:09.601 "is_configured": false, 00:26:09.601 "data_offset": 0, 00:26:09.601 "data_size": 63488 00:26:09.601 }, 00:26:09.601 { 00:26:09.601 "name": "BaseBdev4", 00:26:09.601 "uuid": "878da251-2f1b-49ec-9bf6-18bb465e01c3", 00:26:09.601 "is_configured": true, 00:26:09.601 "data_offset": 2048, 00:26:09.601 "data_size": 63488 00:26:09.601 } 00:26:09.601 ] 00:26:09.601 }' 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:09.601 12:57:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.870 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:09.870 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:09.870 12:57:52 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.871 [2024-12-05 12:57:52.300393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:09.871 12:57:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:09.871 "name": "Existed_Raid", 00:26:09.871 "uuid": "dbefe754-68af-4ea7-a2df-515139239999", 00:26:09.871 "strip_size_kb": 64, 00:26:09.871 "state": "configuring", 00:26:09.871 "raid_level": "raid5f", 00:26:09.871 "superblock": true, 00:26:09.871 "num_base_bdevs": 4, 00:26:09.871 "num_base_bdevs_discovered": 3, 00:26:09.871 "num_base_bdevs_operational": 4, 00:26:09.871 "base_bdevs_list": [ 00:26:09.871 { 00:26:09.871 "name": "BaseBdev1", 00:26:09.871 "uuid": "435856e5-e444-4f14-abc6-389f2386e7d9", 00:26:09.871 "is_configured": true, 00:26:09.871 "data_offset": 2048, 00:26:09.871 "data_size": 63488 00:26:09.871 }, 00:26:09.871 { 00:26:09.871 "name": null, 00:26:09.871 "uuid": "a1a3aae5-921d-4415-be68-8c1d8f5675fc", 00:26:09.871 "is_configured": false, 00:26:09.871 "data_offset": 0, 00:26:09.871 "data_size": 63488 00:26:09.871 }, 00:26:09.871 { 00:26:09.871 "name": "BaseBdev3", 00:26:09.871 "uuid": "c4e61c38-60af-4bca-ab1e-0d729a18ed03", 00:26:09.871 "is_configured": true, 00:26:09.871 "data_offset": 2048, 00:26:09.871 "data_size": 63488 00:26:09.871 }, 00:26:09.871 { 
00:26:09.871 "name": "BaseBdev4", 00:26:09.871 "uuid": "878da251-2f1b-49ec-9bf6-18bb465e01c3", 00:26:09.871 "is_configured": true, 00:26:09.871 "data_offset": 2048, 00:26:09.871 "data_size": 63488 00:26:09.871 } 00:26:09.871 ] 00:26:09.871 }' 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:09.871 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.129 [2024-12-05 12:57:52.644513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.129 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:10.387 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.387 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:10.387 "name": "Existed_Raid", 00:26:10.387 "uuid": "dbefe754-68af-4ea7-a2df-515139239999", 00:26:10.387 "strip_size_kb": 64, 00:26:10.387 "state": "configuring", 00:26:10.387 "raid_level": "raid5f", 00:26:10.387 "superblock": true, 00:26:10.387 "num_base_bdevs": 4, 00:26:10.387 "num_base_bdevs_discovered": 2, 00:26:10.387 
"num_base_bdevs_operational": 4, 00:26:10.387 "base_bdevs_list": [ 00:26:10.387 { 00:26:10.387 "name": null, 00:26:10.387 "uuid": "435856e5-e444-4f14-abc6-389f2386e7d9", 00:26:10.387 "is_configured": false, 00:26:10.387 "data_offset": 0, 00:26:10.387 "data_size": 63488 00:26:10.387 }, 00:26:10.387 { 00:26:10.387 "name": null, 00:26:10.387 "uuid": "a1a3aae5-921d-4415-be68-8c1d8f5675fc", 00:26:10.387 "is_configured": false, 00:26:10.387 "data_offset": 0, 00:26:10.387 "data_size": 63488 00:26:10.387 }, 00:26:10.387 { 00:26:10.387 "name": "BaseBdev3", 00:26:10.387 "uuid": "c4e61c38-60af-4bca-ab1e-0d729a18ed03", 00:26:10.387 "is_configured": true, 00:26:10.387 "data_offset": 2048, 00:26:10.387 "data_size": 63488 00:26:10.387 }, 00:26:10.387 { 00:26:10.387 "name": "BaseBdev4", 00:26:10.387 "uuid": "878da251-2f1b-49ec-9bf6-18bb465e01c3", 00:26:10.387 "is_configured": true, 00:26:10.387 "data_offset": 2048, 00:26:10.387 "data_size": 63488 00:26:10.387 } 00:26:10.387 ] 00:26:10.387 }' 00:26:10.387 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:10.387 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.645 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:10.645 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.645 12:57:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.645 12:57:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:10.645 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.645 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:26:10.645 12:57:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:10.645 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.645 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.645 [2024-12-05 12:57:53.036273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:10.645 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.645 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:10.645 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:10.645 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:10.645 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:10.645 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:10.646 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:10.646 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:10.646 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:10.646 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:10.646 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:10.646 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:10.646 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:26:10.646 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.646 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.646 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.646 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:10.646 "name": "Existed_Raid", 00:26:10.646 "uuid": "dbefe754-68af-4ea7-a2df-515139239999", 00:26:10.646 "strip_size_kb": 64, 00:26:10.646 "state": "configuring", 00:26:10.646 "raid_level": "raid5f", 00:26:10.646 "superblock": true, 00:26:10.646 "num_base_bdevs": 4, 00:26:10.646 "num_base_bdevs_discovered": 3, 00:26:10.646 "num_base_bdevs_operational": 4, 00:26:10.646 "base_bdevs_list": [ 00:26:10.646 { 00:26:10.646 "name": null, 00:26:10.646 "uuid": "435856e5-e444-4f14-abc6-389f2386e7d9", 00:26:10.646 "is_configured": false, 00:26:10.646 "data_offset": 0, 00:26:10.646 "data_size": 63488 00:26:10.646 }, 00:26:10.646 { 00:26:10.646 "name": "BaseBdev2", 00:26:10.646 "uuid": "a1a3aae5-921d-4415-be68-8c1d8f5675fc", 00:26:10.646 "is_configured": true, 00:26:10.646 "data_offset": 2048, 00:26:10.646 "data_size": 63488 00:26:10.646 }, 00:26:10.646 { 00:26:10.646 "name": "BaseBdev3", 00:26:10.646 "uuid": "c4e61c38-60af-4bca-ab1e-0d729a18ed03", 00:26:10.646 "is_configured": true, 00:26:10.646 "data_offset": 2048, 00:26:10.646 "data_size": 63488 00:26:10.646 }, 00:26:10.646 { 00:26:10.646 "name": "BaseBdev4", 00:26:10.646 "uuid": "878da251-2f1b-49ec-9bf6-18bb465e01c3", 00:26:10.646 "is_configured": true, 00:26:10.646 "data_offset": 2048, 00:26:10.646 "data_size": 63488 00:26:10.646 } 00:26:10.646 ] 00:26:10.646 }' 00:26:10.646 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:10.646 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 435856e5-e444-4f14-abc6-389f2386e7d9 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.904 [2024-12-05 12:57:53.430793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:10.904 NewBaseBdev 00:26:10.904 [2024-12-05 12:57:53.431119] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:10.904 
[2024-12-05 12:57:53.431136] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:10.904 [2024-12-05 12:57:53.431385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.904 [2024-12-05 12:57:53.436019] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:10.904 [2024-12-05 12:57:53.436040] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:26:10.904 [2024-12-05 12:57:53.436246] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.904 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.904 [ 00:26:10.904 { 00:26:10.904 "name": "NewBaseBdev", 00:26:10.904 "aliases": [ 00:26:10.904 "435856e5-e444-4f14-abc6-389f2386e7d9" 00:26:10.904 ], 00:26:10.904 "product_name": "Malloc disk", 00:26:10.904 "block_size": 512, 00:26:10.904 "num_blocks": 65536, 00:26:10.904 "uuid": "435856e5-e444-4f14-abc6-389f2386e7d9", 00:26:10.904 "assigned_rate_limits": { 00:26:10.904 "rw_ios_per_sec": 0, 00:26:10.904 "rw_mbytes_per_sec": 0, 00:26:10.904 "r_mbytes_per_sec": 0, 00:26:10.904 "w_mbytes_per_sec": 0 00:26:10.904 }, 00:26:10.904 "claimed": true, 00:26:10.904 "claim_type": "exclusive_write", 00:26:10.904 "zoned": false, 00:26:10.904 "supported_io_types": { 00:26:10.904 "read": true, 00:26:10.904 "write": true, 00:26:10.904 "unmap": true, 00:26:10.904 "flush": true, 00:26:10.904 "reset": true, 00:26:10.904 "nvme_admin": false, 00:26:10.904 "nvme_io": false, 00:26:10.904 "nvme_io_md": false, 00:26:10.904 "write_zeroes": true, 00:26:10.904 "zcopy": true, 00:26:10.904 "get_zone_info": false, 00:26:10.904 "zone_management": false, 00:26:10.904 "zone_append": false, 00:26:10.904 "compare": false, 00:26:10.904 "compare_and_write": false, 00:26:10.904 "abort": true, 00:26:10.904 "seek_hole": false, 00:26:10.904 "seek_data": false, 00:26:10.904 "copy": true, 00:26:10.904 "nvme_iov_md": false 00:26:10.904 }, 00:26:10.904 "memory_domains": [ 00:26:10.904 { 00:26:10.904 "dma_device_id": "system", 00:26:10.904 "dma_device_type": 1 00:26:10.904 }, 00:26:10.904 { 00:26:10.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.904 "dma_device_type": 2 00:26:10.904 } 00:26:10.904 ], 00:26:10.905 "driver_specific": {} 00:26:10.905 } 00:26:10.905 ] 00:26:10.905 12:57:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.905 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:10.905 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:26:10.905 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:10.905 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:10.905 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:10.905 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:10.905 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:10.905 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:10.905 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:10.905 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:10.905 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:10.905 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:10.905 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:10.905 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.905 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.905 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:10.905 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:10.905 "name": "Existed_Raid", 00:26:10.905 "uuid": "dbefe754-68af-4ea7-a2df-515139239999", 00:26:10.905 "strip_size_kb": 64, 00:26:10.905 "state": "online", 00:26:10.905 "raid_level": "raid5f", 00:26:10.905 "superblock": true, 00:26:10.905 "num_base_bdevs": 4, 00:26:10.905 "num_base_bdevs_discovered": 4, 00:26:10.905 "num_base_bdevs_operational": 4, 00:26:10.905 "base_bdevs_list": [ 00:26:10.905 { 00:26:10.905 "name": "NewBaseBdev", 00:26:10.905 "uuid": "435856e5-e444-4f14-abc6-389f2386e7d9", 00:26:10.905 "is_configured": true, 00:26:10.905 "data_offset": 2048, 00:26:10.905 "data_size": 63488 00:26:10.905 }, 00:26:10.905 { 00:26:10.905 "name": "BaseBdev2", 00:26:10.905 "uuid": "a1a3aae5-921d-4415-be68-8c1d8f5675fc", 00:26:10.905 "is_configured": true, 00:26:10.905 "data_offset": 2048, 00:26:10.905 "data_size": 63488 00:26:10.905 }, 00:26:10.905 { 00:26:10.905 "name": "BaseBdev3", 00:26:10.905 "uuid": "c4e61c38-60af-4bca-ab1e-0d729a18ed03", 00:26:10.905 "is_configured": true, 00:26:10.905 "data_offset": 2048, 00:26:10.905 "data_size": 63488 00:26:10.905 }, 00:26:10.905 { 00:26:10.905 "name": "BaseBdev4", 00:26:10.905 "uuid": "878da251-2f1b-49ec-9bf6-18bb465e01c3", 00:26:10.905 "is_configured": true, 00:26:10.905 "data_offset": 2048, 00:26:10.905 "data_size": 63488 00:26:10.905 } 00:26:10.905 ] 00:26:10.905 }' 00:26:10.905 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:10.905 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:11.471 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:26:11.471 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:11.471 12:57:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:11.471 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:11.471 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:11.471 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:11.471 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:11.471 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.471 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:11.471 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:11.471 [2024-12-05 12:57:53.765795] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:11.471 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.471 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:11.471 "name": "Existed_Raid", 00:26:11.471 "aliases": [ 00:26:11.471 "dbefe754-68af-4ea7-a2df-515139239999" 00:26:11.471 ], 00:26:11.471 "product_name": "Raid Volume", 00:26:11.471 "block_size": 512, 00:26:11.471 "num_blocks": 190464, 00:26:11.471 "uuid": "dbefe754-68af-4ea7-a2df-515139239999", 00:26:11.471 "assigned_rate_limits": { 00:26:11.471 "rw_ios_per_sec": 0, 00:26:11.471 "rw_mbytes_per_sec": 0, 00:26:11.471 "r_mbytes_per_sec": 0, 00:26:11.471 "w_mbytes_per_sec": 0 00:26:11.471 }, 00:26:11.471 "claimed": false, 00:26:11.471 "zoned": false, 00:26:11.471 "supported_io_types": { 00:26:11.471 "read": true, 00:26:11.471 "write": true, 00:26:11.471 "unmap": false, 00:26:11.471 "flush": false, 00:26:11.471 "reset": true, 00:26:11.471 "nvme_admin": false, 00:26:11.471 "nvme_io": false, 
00:26:11.471 "nvme_io_md": false, 00:26:11.471 "write_zeroes": true, 00:26:11.471 "zcopy": false, 00:26:11.471 "get_zone_info": false, 00:26:11.471 "zone_management": false, 00:26:11.471 "zone_append": false, 00:26:11.471 "compare": false, 00:26:11.471 "compare_and_write": false, 00:26:11.471 "abort": false, 00:26:11.471 "seek_hole": false, 00:26:11.471 "seek_data": false, 00:26:11.471 "copy": false, 00:26:11.471 "nvme_iov_md": false 00:26:11.471 }, 00:26:11.471 "driver_specific": { 00:26:11.471 "raid": { 00:26:11.471 "uuid": "dbefe754-68af-4ea7-a2df-515139239999", 00:26:11.471 "strip_size_kb": 64, 00:26:11.472 "state": "online", 00:26:11.472 "raid_level": "raid5f", 00:26:11.472 "superblock": true, 00:26:11.472 "num_base_bdevs": 4, 00:26:11.472 "num_base_bdevs_discovered": 4, 00:26:11.472 "num_base_bdevs_operational": 4, 00:26:11.472 "base_bdevs_list": [ 00:26:11.472 { 00:26:11.472 "name": "NewBaseBdev", 00:26:11.472 "uuid": "435856e5-e444-4f14-abc6-389f2386e7d9", 00:26:11.472 "is_configured": true, 00:26:11.472 "data_offset": 2048, 00:26:11.472 "data_size": 63488 00:26:11.472 }, 00:26:11.472 { 00:26:11.472 "name": "BaseBdev2", 00:26:11.472 "uuid": "a1a3aae5-921d-4415-be68-8c1d8f5675fc", 00:26:11.472 "is_configured": true, 00:26:11.472 "data_offset": 2048, 00:26:11.472 "data_size": 63488 00:26:11.472 }, 00:26:11.472 { 00:26:11.472 "name": "BaseBdev3", 00:26:11.472 "uuid": "c4e61c38-60af-4bca-ab1e-0d729a18ed03", 00:26:11.472 "is_configured": true, 00:26:11.472 "data_offset": 2048, 00:26:11.472 "data_size": 63488 00:26:11.472 }, 00:26:11.472 { 00:26:11.472 "name": "BaseBdev4", 00:26:11.472 "uuid": "878da251-2f1b-49ec-9bf6-18bb465e01c3", 00:26:11.472 "is_configured": true, 00:26:11.472 "data_offset": 2048, 00:26:11.472 "data_size": 63488 00:26:11.472 } 00:26:11.472 ] 00:26:11.472 } 00:26:11.472 } 00:26:11.472 }' 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:26:11.472 BaseBdev2 00:26:11.472 BaseBdev3 00:26:11.472 BaseBdev4' 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:11.472 12:57:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:11.472 [2024-12-05 12:57:53.989603] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:11.472 [2024-12-05 12:57:53.989628] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:11.472 [2024-12-05 12:57:53.989688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:11.472 [2024-12-05 12:57:53.989981] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:11.472 [2024-12-05 12:57:53.989991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80828 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80828 ']' 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80828 00:26:11.472 12:57:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:11.472 12:57:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80828 00:26:11.472 killing process with pid 80828 00:26:11.472 12:57:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:11.472 12:57:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:11.472 12:57:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80828' 00:26:11.472 12:57:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80828 00:26:11.472 [2024-12-05 12:57:54.023055] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:11.472 12:57:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80828 00:26:11.730 [2024-12-05 12:57:54.270897] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:12.666 12:57:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:26:12.666 00:26:12.666 real 0m8.329s 00:26:12.666 user 0m13.180s 00:26:12.666 sys 0m1.385s 00:26:12.666 12:57:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:12.666 12:57:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:12.666 ************************************ 00:26:12.666 END TEST raid5f_state_function_test_sb 00:26:12.666 ************************************ 00:26:12.666 12:57:55 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:26:12.666 12:57:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:12.666 
12:57:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:12.666 12:57:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:26:12.666 ************************************
00:26:12.666 START TEST raid5f_superblock_test
00:26:12.666 ************************************
00:26:12.666 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4
00:26:12.666 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f
00:26:12.666 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4
00:26:12.666 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:26:12.666 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:26:12.666 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:26:12.666 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:26:12.666 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:26:12.666 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:26:12.666 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:26:12.666 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:26:12.666 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:26:12.666 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:26:12.666 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:26:12.666 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']'
00:26:12.667 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:26:12.667 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:26:12.667 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81464
00:26:12.667 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81464
00:26:12.667 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81464 ']'
00:26:12.667 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:12.667 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:12.667 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:12.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:12.667 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:12.667 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:26:12.667 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:12.667 [2024-12-05 12:57:55.112605] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization...
00:26:12.667 [2024-12-05 12:57:55.112848] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81464 ]
00:26:12.923 [2024-12-05 12:57:55.272138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:12.923 [2024-12-05 12:57:55.372657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:13.181 [2024-12-05 12:57:55.511326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:26:13.181 [2024-12-05 12:57:55.511524] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:26:13.438 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:13.438 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:26:13.438 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:26:13.438 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:26:13.438 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:26:13.438 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:26:13.438 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:26:13.438 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:26:13.438 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:26:13.438 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:26:13.438 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:26:13.439 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.439 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:13.439 malloc1
00:26:13.439 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.439 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:26:13.439 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.439 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:13.439 [2024-12-05 12:57:55.992031] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:26:13.439 [2024-12-05 12:57:55.992093] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:26:13.439 [2024-12-05 12:57:55.992114] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:26:13.439 [2024-12-05 12:57:55.992123] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:26:13.439 [2024-12-05 12:57:55.994344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:26:13.439 [2024-12-05 12:57:55.994496] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:26:13.439 pt1
00:26:13.439 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.439 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:26:13.439 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:26:13.439 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:26:13.439 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:26:13.439 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:26:13.439 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:26:13.439 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:26:13.439 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:26:13.439 12:57:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:26:13.439 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.439 12:57:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:13.696 malloc2
00:26:13.696 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.696 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:26:13.696 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.696 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:13.696 [2024-12-05 12:57:56.028352] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:26:13.696 [2024-12-05 12:57:56.028407] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:26:13.696 [2024-12-05 12:57:56.028429] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:26:13.696 [2024-12-05 12:57:56.028438] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:26:13.696 [2024-12-05 12:57:56.030549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:26:13.696 [2024-12-05 12:57:56.030581] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:26:13.696 pt2
00:26:13.696 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.696 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:13.697 malloc3
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:13.697 [2024-12-05 12:57:56.075045] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:26:13.697 [2024-12-05 12:57:56.075099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:26:13.697 [2024-12-05 12:57:56.075120] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:26:13.697 [2024-12-05 12:57:56.075129] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:26:13.697 [2024-12-05 12:57:56.077259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:26:13.697 [2024-12-05 12:57:56.077295] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:26:13.697 pt3
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:13.697 malloc4
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:13.697 [2024-12-05 12:57:56.115299] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:26:13.697 [2024-12-05 12:57:56.115351] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:26:13.697 [2024-12-05 12:57:56.115368] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:26:13.697 [2024-12-05 12:57:56.115377] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:26:13.697 [2024-12-05 12:57:56.117505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:26:13.697 [2024-12-05 12:57:56.117537] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:26:13.697 pt4
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:13.697 [2024-12-05 12:57:56.123336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:26:13.697 [2024-12-05 12:57:56.125194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:26:13.697 [2024-12-05 12:57:56.125388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:26:13.697 [2024-12-05 12:57:56.125441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:26:13.697 [2024-12-05 12:57:56.125647] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:26:13.697 [2024-12-05 12:57:56.125662] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:26:13.697 [2024-12-05 12:57:56.125912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:26:13.697 [2024-12-05 12:57:56.130924] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:26:13.697 [2024-12-05 12:57:56.130945] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:26:13.697 [2024-12-05 12:57:56.131117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:26:13.697 "name": "raid_bdev1",
00:26:13.697 "uuid": "96508a19-4022-4959-b52c-1167523e2c92",
00:26:13.697 "strip_size_kb": 64,
00:26:13.697 "state": "online",
00:26:13.697 "raid_level": "raid5f",
00:26:13.697 "superblock": true,
00:26:13.697 "num_base_bdevs": 4,
00:26:13.697 "num_base_bdevs_discovered": 4,
00:26:13.697 "num_base_bdevs_operational": 4,
00:26:13.697 "base_bdevs_list": [
00:26:13.697 {
00:26:13.697 "name": "pt1",
00:26:13.697 "uuid": "00000000-0000-0000-0000-000000000001",
00:26:13.697 "is_configured": true,
00:26:13.697 "data_offset": 2048,
00:26:13.697 "data_size": 63488
00:26:13.697 },
00:26:13.697 {
00:26:13.697 "name": "pt2",
00:26:13.697 "uuid": "00000000-0000-0000-0000-000000000002",
00:26:13.697 "is_configured": true,
00:26:13.697 "data_offset": 2048,
00:26:13.697 "data_size": 63488
00:26:13.697 },
00:26:13.697 {
00:26:13.697 "name": "pt3",
00:26:13.697 "uuid": "00000000-0000-0000-0000-000000000003",
00:26:13.697 "is_configured": true,
00:26:13.697 "data_offset": 2048,
00:26:13.697 "data_size": 63488
00:26:13.697 },
00:26:13.697 {
00:26:13.697 "name": "pt4",
00:26:13.697 "uuid": "00000000-0000-0000-0000-000000000004",
00:26:13.697 "is_configured": true,
00:26:13.697 "data_offset": 2048,
00:26:13.697 "data_size": 63488
00:26:13.697 }
00:26:13.697 ]
00:26:13.697 }'
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:26:13.697 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:13.955 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:26:13.955 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:26:13.955 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:26:13.955 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:26:13.955 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:26:13.955 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:26:13.955 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:26:13.955 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:26:13.955 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.955 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:13.955 [2024-12-05 12:57:56.444710] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:26:13.955 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.955 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:26:13.955 "name": "raid_bdev1",
00:26:13.955 "aliases": [
00:26:13.955 "96508a19-4022-4959-b52c-1167523e2c92"
00:26:13.955 ],
00:26:13.955 "product_name": "Raid Volume",
00:26:13.955 "block_size": 512,
00:26:13.955 "num_blocks": 190464,
00:26:13.955 "uuid": "96508a19-4022-4959-b52c-1167523e2c92",
00:26:13.955 "assigned_rate_limits": {
00:26:13.955 "rw_ios_per_sec": 0,
00:26:13.955 "rw_mbytes_per_sec": 0,
00:26:13.955 "r_mbytes_per_sec": 0,
00:26:13.955 "w_mbytes_per_sec": 0
00:26:13.955 },
00:26:13.956 "claimed": false,
00:26:13.956 "zoned": false,
00:26:13.956 "supported_io_types": {
00:26:13.956 "read": true,
00:26:13.956 "write": true,
00:26:13.956 "unmap": false,
00:26:13.956 "flush": false,
00:26:13.956 "reset": true,
00:26:13.956 "nvme_admin": false,
00:26:13.956 "nvme_io": false,
00:26:13.956 "nvme_io_md": false,
00:26:13.956 "write_zeroes": true,
00:26:13.956 "zcopy": false,
00:26:13.956 "get_zone_info": false,
00:26:13.956 "zone_management": false,
00:26:13.956 "zone_append": false,
00:26:13.956 "compare": false,
00:26:13.956 "compare_and_write": false,
00:26:13.956 "abort": false,
00:26:13.956 "seek_hole": false,
00:26:13.956 "seek_data": false,
00:26:13.956 "copy": false,
00:26:13.956 "nvme_iov_md": false
00:26:13.956 },
00:26:13.956 "driver_specific": {
00:26:13.956 "raid": {
00:26:13.956 "uuid": "96508a19-4022-4959-b52c-1167523e2c92",
00:26:13.956 "strip_size_kb": 64,
00:26:13.956 "state": "online",
00:26:13.956 "raid_level": "raid5f",
00:26:13.956 "superblock": true,
00:26:13.956 "num_base_bdevs": 4,
00:26:13.956 "num_base_bdevs_discovered": 4,
00:26:13.956 "num_base_bdevs_operational": 4,
00:26:13.956 "base_bdevs_list": [
00:26:13.956 {
00:26:13.956 "name": "pt1",
00:26:13.956 "uuid": "00000000-0000-0000-0000-000000000001",
00:26:13.956 "is_configured": true,
00:26:13.956 "data_offset": 2048,
00:26:13.956 "data_size": 63488
00:26:13.956 },
00:26:13.956 {
00:26:13.956 "name": "pt2",
00:26:13.956 "uuid": "00000000-0000-0000-0000-000000000002",
00:26:13.956 "is_configured": true,
00:26:13.956 "data_offset": 2048,
00:26:13.956 "data_size": 63488
00:26:13.956 },
00:26:13.956 {
00:26:13.956 "name": "pt3",
00:26:13.956 "uuid": "00000000-0000-0000-0000-000000000003",
00:26:13.956 "is_configured": true,
00:26:13.956 "data_offset": 2048,
00:26:13.956 "data_size": 63488
00:26:13.956 },
00:26:13.956 {
00:26:13.956 "name": "pt4",
00:26:13.956 "uuid": "00000000-0000-0000-0000-000000000004",
00:26:13.956 "is_configured": true,
00:26:13.956 "data_offset": 2048,
00:26:13.956 "data_size": 63488
00:26:13.956 }
00:26:13.956 ]
00:26:13.956 }
00:26:13.956 }
00:26:13.956 }'
00:26:13.956 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:26:13.956 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:26:13.956 pt2
00:26:13.956 pt3
00:26:13.956 pt4'
00:26:13.956 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:26:13.956 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:26:13.956 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:26:14.213 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:26:14.213 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.213 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:26:14.213 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:14.213 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.213 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:26:14.213 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:26:14.213 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:26:14.213 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:26:14.213 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.213 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:14.214 [2024-12-05 12:57:56.680706] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=96508a19-4022-4959-b52c-1167523e2c92
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 96508a19-4022-4959-b52c-1167523e2c92 ']'
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:14.214 [2024-12-05 12:57:56.712535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:26:14.214 [2024-12-05 12:57:56.712634] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:26:14.214 [2024-12-05 12:57:56.712753] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:26:14.214 [2024-12-05 12:57:56.712856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:26:14.214 [2024-12-05 12:57:56.712926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.214 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:14.472 [2024-12-05 12:57:56.828586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:26:14.472 [2024-12-05 12:57:56.830407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:26:14.472 [2024-12-05 12:57:56.830453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:26:14.472 [2024-12-05 12:57:56.830486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:26:14.472 [2024-12-05 12:57:56.830546] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:26:14.472 [2024-12-05 12:57:56.830588] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:26:14.472 [2024-12-05 12:57:56.830607] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:26:14.472 [2024-12-05 12:57:56.830625] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:26:14.472 [2024-12-05 12:57:56.830638] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:26:14.472 [2024-12-05 12:57:56.830649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:26:14.472 request:
00:26:14.472 {
00:26:14.472 "name": "raid_bdev1",
00:26:14.472 "raid_level": "raid5f",
00:26:14.472 "base_bdevs": [
00:26:14.472 "malloc1",
00:26:14.472 "malloc2",
00:26:14.472 "malloc3",
00:26:14.472 "malloc4"
00:26:14.472 ],
00:26:14.472 "strip_size_kb": 64,
00:26:14.472 "superblock": false,
00:26:14.472 "method": "bdev_raid_create",
00:26:14.472 "req_id": 1
00:26:14.472 }
00:26:14.472 Got JSON-RPC error response
00:26:14.472 response:
00:26:14.472 {
00:26:14.472 "code": -17,
00:26:14.472 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:26:14.472 }
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.472 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:14.472 [2024-12-05 12:57:56.872566] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:26:14.472 [2024-12-05 12:57:56.872607] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:26:14.472 [2024-12-05 12:57:56.872620] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:26:14.472 [2024-12-05 12:57:56.872630] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:26:14.472 [2024-12-05 12:57:56.874746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:26:14.472 [2024-12-05 12:57:56.874783] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:26:14.472 [2024-12-05 12:57:56.874849] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:26:14.472 [2024-12-05 12:57:56.874892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:26:14.472 pt1
00:26:14.473 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.473 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:26:14.473 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:26:14.473 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:26:14.473 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:26:14.473 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:26:14.473 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:26:14.473 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:26:14.473 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:26:14.473 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:26:14.473 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111
-- # local tmp 00:26:14.473 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.473 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:14.473 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.473 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.473 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.473 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:14.473 "name": "raid_bdev1", 00:26:14.473 "uuid": "96508a19-4022-4959-b52c-1167523e2c92", 00:26:14.473 "strip_size_kb": 64, 00:26:14.473 "state": "configuring", 00:26:14.473 "raid_level": "raid5f", 00:26:14.473 "superblock": true, 00:26:14.473 "num_base_bdevs": 4, 00:26:14.473 "num_base_bdevs_discovered": 1, 00:26:14.473 "num_base_bdevs_operational": 4, 00:26:14.473 "base_bdevs_list": [ 00:26:14.473 { 00:26:14.473 "name": "pt1", 00:26:14.473 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:14.473 "is_configured": true, 00:26:14.473 "data_offset": 2048, 00:26:14.473 "data_size": 63488 00:26:14.473 }, 00:26:14.473 { 00:26:14.473 "name": null, 00:26:14.473 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:14.473 "is_configured": false, 00:26:14.473 "data_offset": 2048, 00:26:14.473 "data_size": 63488 00:26:14.473 }, 00:26:14.473 { 00:26:14.473 "name": null, 00:26:14.473 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:14.473 "is_configured": false, 00:26:14.473 "data_offset": 2048, 00:26:14.473 "data_size": 63488 00:26:14.473 }, 00:26:14.473 { 00:26:14.473 "name": null, 00:26:14.473 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:14.473 "is_configured": false, 00:26:14.473 "data_offset": 2048, 00:26:14.473 "data_size": 63488 00:26:14.473 } 00:26:14.473 ] 00:26:14.473 }' 
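The log above shows `bdev_raid_create` rejected with JSON-RPC error -17 ("File exists") because the malloc bdevs still carry a superblock from a different raid bdev, after which `verify_raid_bdev_state` pulls the bdev list and filters it with `jq -r '.[] | select(.name == "raid_bdev1")'`. A minimal Python sketch of that selection step, using field values copied from the log; the surrounding list is an assumption about the RPC result shape, not a live RPC call:

```python
import json

# Sample of a `bdev_raid_get_bdevs all` response; UUID and field values are
# taken from the log above, the enclosing list shape is an assumption.
response = json.loads("""
[
  {
    "name": "raid_bdev1",
    "uuid": "96508a19-4022-4959-b52c-1167523e2c92",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid5f",
    "superblock": true,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 4
  }
]
""")

# Equivalent of the jq filter used by verify_raid_bdev_state:
#   jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in response if b["name"] == "raid_bdev1")
print(info["state"], info["raid_level"], info["num_base_bdevs_discovered"])
```

The shell helper then compares these fields against the expected state (`configuring`), level (`raid5f`), strip size, and base-bdev counts passed as arguments.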
00:26:14.473 12:57:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:14.473 12:57:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.730 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:26:14.730 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:14.730 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.730 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.730 [2024-12-05 12:57:57.196670] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:14.730 [2024-12-05 12:57:57.196730] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:14.730 [2024-12-05 12:57:57.196746] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:26:14.730 [2024-12-05 12:57:57.196756] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:14.730 [2024-12-05 12:57:57.197143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:14.730 [2024-12-05 12:57:57.197158] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:14.730 [2024-12-05 12:57:57.197223] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:14.730 [2024-12-05 12:57:57.197244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:14.730 pt2 00:26:14.730 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.730 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:26:14.730 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:14.730 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.730 [2024-12-05 12:57:57.204672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:14.730 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.730 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:26:14.730 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:14.730 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:14.730 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:14.730 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:14.730 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:14.730 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:14.730 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:14.730 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:14.730 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:14.731 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:14.731 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.731 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.731 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.731 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:26:14.731 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:14.731 "name": "raid_bdev1", 00:26:14.731 "uuid": "96508a19-4022-4959-b52c-1167523e2c92", 00:26:14.731 "strip_size_kb": 64, 00:26:14.731 "state": "configuring", 00:26:14.731 "raid_level": "raid5f", 00:26:14.731 "superblock": true, 00:26:14.731 "num_base_bdevs": 4, 00:26:14.731 "num_base_bdevs_discovered": 1, 00:26:14.731 "num_base_bdevs_operational": 4, 00:26:14.731 "base_bdevs_list": [ 00:26:14.731 { 00:26:14.731 "name": "pt1", 00:26:14.731 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:14.731 "is_configured": true, 00:26:14.731 "data_offset": 2048, 00:26:14.731 "data_size": 63488 00:26:14.731 }, 00:26:14.731 { 00:26:14.731 "name": null, 00:26:14.731 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:14.731 "is_configured": false, 00:26:14.731 "data_offset": 0, 00:26:14.731 "data_size": 63488 00:26:14.731 }, 00:26:14.731 { 00:26:14.731 "name": null, 00:26:14.731 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:14.731 "is_configured": false, 00:26:14.731 "data_offset": 2048, 00:26:14.731 "data_size": 63488 00:26:14.731 }, 00:26:14.731 { 00:26:14.731 "name": null, 00:26:14.731 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:14.731 "is_configured": false, 00:26:14.731 "data_offset": 2048, 00:26:14.731 "data_size": 63488 00:26:14.731 } 00:26:14.731 ] 00:26:14.731 }' 00:26:14.731 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:14.731 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.988 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:26:14.988 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:14.988 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
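The loop at `bdev_raid.sh@478` above issues one `bdev_passthru_create -b mallocN -p ptN -u <uuid>` call per base bdev. A hypothetical reconstruction of those calls as JSON-RPC 2.0 request payloads; the method name and CLI flags are taken from the log, but the JSON parameter names (`base_bdev_name`, `name`, `uuid`) are an assumption inferred from the flags, not confirmed by the log:

```python
import json

def passthru_create_request(req_id, index):
    # One bdev_passthru_create request, mirroring the log's
    # `-b malloc{index} -p pt{index} -u 00000000-...-{index:012d}` pattern.
    # Parameter names are assumptions based on the CLI flags shown.
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "bdev_passthru_create",
        "params": {
            "base_bdev_name": f"malloc{index}",
            "name": f"pt{index}",
            "uuid": f"00000000-0000-0000-0000-{index:012d}",
        },
    }

# The log's loop runs for pt2..pt4 (pt1 was created earlier in the test).
requests = [passthru_create_request(i, i) for i in range(2, 5)]
print(json.dumps(requests[0], indent=2))
```

Each successful call produces the `vbdev_passthru_register` NOTICE lines seen above (match on the base bdev, claim it, register the `pt_bdev`).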
00:26:14.988 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.988 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.988 [2024-12-05 12:57:57.496740] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:14.988 [2024-12-05 12:57:57.496796] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:14.988 [2024-12-05 12:57:57.496813] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:26:14.988 [2024-12-05 12:57:57.496821] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:14.988 [2024-12-05 12:57:57.497216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:14.988 [2024-12-05 12:57:57.497229] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:14.988 [2024-12-05 12:57:57.497298] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:14.988 [2024-12-05 12:57:57.497316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:14.988 pt2 00:26:14.988 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.988 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:14.988 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:14.988 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:14.988 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.988 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.988 [2024-12-05 12:57:57.504726] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:26:14.988 [2024-12-05 12:57:57.504768] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:14.988 [2024-12-05 12:57:57.504786] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:26:14.988 [2024-12-05 12:57:57.504794] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:14.988 [2024-12-05 12:57:57.505125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:14.988 [2024-12-05 12:57:57.505142] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:14.989 [2024-12-05 12:57:57.505196] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:14.989 [2024-12-05 12:57:57.505220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:14.989 pt3 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.989 [2024-12-05 12:57:57.512709] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:14.989 [2024-12-05 12:57:57.512746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:14.989 [2024-12-05 12:57:57.512760] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:26:14.989 [2024-12-05 12:57:57.512767] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:14.989 [2024-12-05 12:57:57.513108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:14.989 [2024-12-05 12:57:57.513126] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:14.989 [2024-12-05 12:57:57.513178] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:14.989 [2024-12-05 12:57:57.513195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:14.989 [2024-12-05 12:57:57.513320] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:14.989 [2024-12-05 12:57:57.513333] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:14.989 [2024-12-05 12:57:57.513570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:14.989 [2024-12-05 12:57:57.518083] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:14.989 [2024-12-05 12:57:57.518105] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:26:14.989 [2024-12-05 12:57:57.518261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:14.989 pt4 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:14.989 "name": "raid_bdev1", 00:26:14.989 "uuid": "96508a19-4022-4959-b52c-1167523e2c92", 00:26:14.989 "strip_size_kb": 64, 00:26:14.989 "state": "online", 00:26:14.989 "raid_level": "raid5f", 00:26:14.989 "superblock": true, 00:26:14.989 "num_base_bdevs": 4, 00:26:14.989 "num_base_bdevs_discovered": 4, 00:26:14.989 "num_base_bdevs_operational": 4, 00:26:14.989 "base_bdevs_list": [ 00:26:14.989 { 00:26:14.989 "name": "pt1", 00:26:14.989 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:14.989 "is_configured": true, 00:26:14.989 
"data_offset": 2048, 00:26:14.989 "data_size": 63488 00:26:14.989 }, 00:26:14.989 { 00:26:14.989 "name": "pt2", 00:26:14.989 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:14.989 "is_configured": true, 00:26:14.989 "data_offset": 2048, 00:26:14.989 "data_size": 63488 00:26:14.989 }, 00:26:14.989 { 00:26:14.989 "name": "pt3", 00:26:14.989 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:14.989 "is_configured": true, 00:26:14.989 "data_offset": 2048, 00:26:14.989 "data_size": 63488 00:26:14.989 }, 00:26:14.989 { 00:26:14.989 "name": "pt4", 00:26:14.989 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:14.989 "is_configured": true, 00:26:14.989 "data_offset": 2048, 00:26:14.989 "data_size": 63488 00:26:14.989 } 00:26:14.989 ] 00:26:14.989 }' 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:14.989 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.553 12:57:57 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.553 [2024-12-05 12:57:57.839750] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:15.553 "name": "raid_bdev1", 00:26:15.553 "aliases": [ 00:26:15.553 "96508a19-4022-4959-b52c-1167523e2c92" 00:26:15.553 ], 00:26:15.553 "product_name": "Raid Volume", 00:26:15.553 "block_size": 512, 00:26:15.553 "num_blocks": 190464, 00:26:15.553 "uuid": "96508a19-4022-4959-b52c-1167523e2c92", 00:26:15.553 "assigned_rate_limits": { 00:26:15.553 "rw_ios_per_sec": 0, 00:26:15.553 "rw_mbytes_per_sec": 0, 00:26:15.553 "r_mbytes_per_sec": 0, 00:26:15.553 "w_mbytes_per_sec": 0 00:26:15.553 }, 00:26:15.553 "claimed": false, 00:26:15.553 "zoned": false, 00:26:15.553 "supported_io_types": { 00:26:15.553 "read": true, 00:26:15.553 "write": true, 00:26:15.553 "unmap": false, 00:26:15.553 "flush": false, 00:26:15.553 "reset": true, 00:26:15.553 "nvme_admin": false, 00:26:15.553 "nvme_io": false, 00:26:15.553 "nvme_io_md": false, 00:26:15.553 "write_zeroes": true, 00:26:15.553 "zcopy": false, 00:26:15.553 "get_zone_info": false, 00:26:15.553 "zone_management": false, 00:26:15.553 "zone_append": false, 00:26:15.553 "compare": false, 00:26:15.553 "compare_and_write": false, 00:26:15.553 "abort": false, 00:26:15.553 "seek_hole": false, 00:26:15.553 "seek_data": false, 00:26:15.553 "copy": false, 00:26:15.553 "nvme_iov_md": false 00:26:15.553 }, 00:26:15.553 "driver_specific": { 00:26:15.553 "raid": { 00:26:15.553 "uuid": "96508a19-4022-4959-b52c-1167523e2c92", 00:26:15.553 "strip_size_kb": 64, 00:26:15.553 "state": "online", 00:26:15.553 "raid_level": "raid5f", 00:26:15.553 "superblock": true, 00:26:15.553 "num_base_bdevs": 4, 00:26:15.553 "num_base_bdevs_discovered": 4, 
00:26:15.553 "num_base_bdevs_operational": 4, 00:26:15.553 "base_bdevs_list": [ 00:26:15.553 { 00:26:15.553 "name": "pt1", 00:26:15.553 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:15.553 "is_configured": true, 00:26:15.553 "data_offset": 2048, 00:26:15.553 "data_size": 63488 00:26:15.553 }, 00:26:15.553 { 00:26:15.553 "name": "pt2", 00:26:15.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:15.553 "is_configured": true, 00:26:15.553 "data_offset": 2048, 00:26:15.553 "data_size": 63488 00:26:15.553 }, 00:26:15.553 { 00:26:15.553 "name": "pt3", 00:26:15.553 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:15.553 "is_configured": true, 00:26:15.553 "data_offset": 2048, 00:26:15.553 "data_size": 63488 00:26:15.553 }, 00:26:15.553 { 00:26:15.553 "name": "pt4", 00:26:15.553 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:15.553 "is_configured": true, 00:26:15.553 "data_offset": 2048, 00:26:15.553 "data_size": 63488 00:26:15.553 } 00:26:15.553 ] 00:26:15.553 } 00:26:15.553 } 00:26:15.553 }' 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:15.553 pt2 00:26:15.553 pt3 00:26:15.553 pt4' 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.553 12:57:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:15.553 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.553 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:15.553 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:15.553 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:15.553 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:26:15.553 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.553 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.553 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:15.553 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.553 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:15.553 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:15.553 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:15.553 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:26:15.554 [2024-12-05 12:57:58.071757] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.554 12:57:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 96508a19-4022-4959-b52c-1167523e2c92 '!=' 96508a19-4022-4959-b52c-1167523e2c92 ']' 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.554 [2024-12-05 12:57:58.099625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:15.554 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.810 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:15.810 "name": "raid_bdev1", 00:26:15.810 "uuid": "96508a19-4022-4959-b52c-1167523e2c92", 00:26:15.810 "strip_size_kb": 64, 00:26:15.810 "state": "online", 00:26:15.810 "raid_level": "raid5f", 00:26:15.810 "superblock": true, 00:26:15.810 "num_base_bdevs": 4, 00:26:15.810 "num_base_bdevs_discovered": 3, 00:26:15.810 "num_base_bdevs_operational": 3, 00:26:15.810 "base_bdevs_list": [ 00:26:15.810 { 00:26:15.810 "name": null, 00:26:15.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.810 "is_configured": false, 00:26:15.810 "data_offset": 0, 00:26:15.810 "data_size": 63488 00:26:15.810 }, 00:26:15.810 { 00:26:15.810 "name": "pt2", 00:26:15.810 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:15.810 "is_configured": true, 00:26:15.810 "data_offset": 2048, 00:26:15.810 "data_size": 63488 00:26:15.810 }, 00:26:15.810 { 00:26:15.810 "name": "pt3", 00:26:15.810 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:15.810 "is_configured": true, 00:26:15.810 "data_offset": 2048, 00:26:15.810 "data_size": 63488 00:26:15.810 }, 00:26:15.810 { 00:26:15.810 "name": "pt4", 00:26:15.810 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:15.810 "is_configured": true, 00:26:15.810 
"data_offset": 2048, 00:26:15.810 "data_size": 63488 00:26:15.810 } 00:26:15.810 ] 00:26:15.810 }' 00:26:15.810 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:15.810 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.067 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:16.067 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.067 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.067 [2024-12-05 12:57:58.427651] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:16.067 [2024-12-05 12:57:58.427674] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:16.067 [2024-12-05 12:57:58.427730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:16.067 [2024-12-05 12:57:58.427794] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:16.067 [2024-12-05 12:57:58.427801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:16.067 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.067 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:26:16.067 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.067 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.067 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.067 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.067 12:57:58 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:26:16.067 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:26:16.067 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:26:16.067 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:16.067 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:26:16.067 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.067 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.067 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.067 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:16.067 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.068 [2024-12-05 12:57:58.491666] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:16.068 [2024-12-05 12:57:58.491713] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.068 [2024-12-05 12:57:58.491727] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:26:16.068 [2024-12-05 12:57:58.491734] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:16.068 [2024-12-05 12:57:58.493621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.068 [2024-12-05 12:57:58.493651] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:16.068 [2024-12-05 12:57:58.493713] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:16.068 [2024-12-05 12:57:58.493747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:16.068 pt2 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:16.068 "name": "raid_bdev1", 00:26:16.068 "uuid": "96508a19-4022-4959-b52c-1167523e2c92", 00:26:16.068 "strip_size_kb": 64, 00:26:16.068 "state": "configuring", 00:26:16.068 "raid_level": "raid5f", 00:26:16.068 "superblock": true, 00:26:16.068 
"num_base_bdevs": 4, 00:26:16.068 "num_base_bdevs_discovered": 1, 00:26:16.068 "num_base_bdevs_operational": 3, 00:26:16.068 "base_bdevs_list": [ 00:26:16.068 { 00:26:16.068 "name": null, 00:26:16.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.068 "is_configured": false, 00:26:16.068 "data_offset": 2048, 00:26:16.068 "data_size": 63488 00:26:16.068 }, 00:26:16.068 { 00:26:16.068 "name": "pt2", 00:26:16.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:16.068 "is_configured": true, 00:26:16.068 "data_offset": 2048, 00:26:16.068 "data_size": 63488 00:26:16.068 }, 00:26:16.068 { 00:26:16.068 "name": null, 00:26:16.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:16.068 "is_configured": false, 00:26:16.068 "data_offset": 2048, 00:26:16.068 "data_size": 63488 00:26:16.068 }, 00:26:16.068 { 00:26:16.068 "name": null, 00:26:16.068 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:16.068 "is_configured": false, 00:26:16.068 "data_offset": 2048, 00:26:16.068 "data_size": 63488 00:26:16.068 } 00:26:16.068 ] 00:26:16.068 }' 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:16.068 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.325 [2024-12-05 12:57:58.811737] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:16.325 [2024-12-05 
12:57:58.811796] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.325 [2024-12-05 12:57:58.811812] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:26:16.325 [2024-12-05 12:57:58.811820] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:16.325 [2024-12-05 12:57:58.812144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.325 [2024-12-05 12:57:58.812155] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:16.325 [2024-12-05 12:57:58.812212] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:16.325 [2024-12-05 12:57:58.812228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:16.325 pt3 00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.325 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:16.325 "name": "raid_bdev1", 00:26:16.325 "uuid": "96508a19-4022-4959-b52c-1167523e2c92", 00:26:16.325 "strip_size_kb": 64, 00:26:16.325 "state": "configuring", 00:26:16.326 "raid_level": "raid5f", 00:26:16.326 "superblock": true, 00:26:16.326 "num_base_bdevs": 4, 00:26:16.326 "num_base_bdevs_discovered": 2, 00:26:16.326 "num_base_bdevs_operational": 3, 00:26:16.326 "base_bdevs_list": [ 00:26:16.326 { 00:26:16.326 "name": null, 00:26:16.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.326 "is_configured": false, 00:26:16.326 "data_offset": 2048, 00:26:16.326 "data_size": 63488 00:26:16.326 }, 00:26:16.326 { 00:26:16.326 "name": "pt2", 00:26:16.326 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:16.326 "is_configured": true, 00:26:16.326 "data_offset": 2048, 00:26:16.326 "data_size": 63488 00:26:16.326 }, 00:26:16.326 { 00:26:16.326 "name": "pt3", 00:26:16.326 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:16.326 "is_configured": true, 00:26:16.326 "data_offset": 2048, 00:26:16.326 "data_size": 63488 00:26:16.326 }, 00:26:16.326 { 00:26:16.326 "name": null, 00:26:16.326 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:16.326 "is_configured": false, 00:26:16.326 "data_offset": 2048, 
00:26:16.326 "data_size": 63488 00:26:16.326 } 00:26:16.326 ] 00:26:16.326 }' 00:26:16.326 12:57:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:16.326 12:57:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.584 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:26:16.584 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:16.584 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:26:16.584 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:16.584 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.584 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.584 [2024-12-05 12:57:59.119808] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:16.584 [2024-12-05 12:57:59.119858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.584 [2024-12-05 12:57:59.119873] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:26:16.584 [2024-12-05 12:57:59.119879] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:16.584 [2024-12-05 12:57:59.120210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.584 [2024-12-05 12:57:59.120220] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:16.584 [2024-12-05 12:57:59.120279] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:16.584 [2024-12-05 12:57:59.120297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:16.584 [2024-12-05 12:57:59.120410] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:16.584 [2024-12-05 12:57:59.120418] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:16.584 [2024-12-05 12:57:59.120626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:26:16.584 [2024-12-05 12:57:59.124277] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:16.584 [2024-12-05 12:57:59.124297] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:26:16.584 [2024-12-05 12:57:59.124528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:16.584 pt4 00:26:16.584 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.584 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:16.584 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:16.585 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:16.585 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:16.585 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:16.585 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:16.585 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:16.585 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:16.585 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:16.585 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:16.585 
12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.585 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.585 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.585 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.585 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.585 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:16.585 "name": "raid_bdev1", 00:26:16.585 "uuid": "96508a19-4022-4959-b52c-1167523e2c92", 00:26:16.585 "strip_size_kb": 64, 00:26:16.585 "state": "online", 00:26:16.585 "raid_level": "raid5f", 00:26:16.585 "superblock": true, 00:26:16.585 "num_base_bdevs": 4, 00:26:16.585 "num_base_bdevs_discovered": 3, 00:26:16.585 "num_base_bdevs_operational": 3, 00:26:16.585 "base_bdevs_list": [ 00:26:16.585 { 00:26:16.585 "name": null, 00:26:16.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.585 "is_configured": false, 00:26:16.585 "data_offset": 2048, 00:26:16.585 "data_size": 63488 00:26:16.585 }, 00:26:16.585 { 00:26:16.585 "name": "pt2", 00:26:16.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:16.585 "is_configured": true, 00:26:16.585 "data_offset": 2048, 00:26:16.585 "data_size": 63488 00:26:16.585 }, 00:26:16.585 { 00:26:16.585 "name": "pt3", 00:26:16.585 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:16.585 "is_configured": true, 00:26:16.585 "data_offset": 2048, 00:26:16.585 "data_size": 63488 00:26:16.585 }, 00:26:16.585 { 00:26:16.585 "name": "pt4", 00:26:16.585 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:16.585 "is_configured": true, 00:26:16.585 "data_offset": 2048, 00:26:16.585 "data_size": 63488 00:26:16.585 } 00:26:16.585 ] 00:26:16.585 }' 00:26:16.585 12:57:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:16.585 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.150 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:17.150 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.150 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.150 [2024-12-05 12:57:59.448747] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:17.150 [2024-12-05 12:57:59.448865] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:17.150 [2024-12-05 12:57:59.448975] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:17.150 [2024-12-05 12:57:59.449139] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:17.150 [2024-12-05 12:57:59.449215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:26:17.150 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.150 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:17.150 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:26:17.150 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.150 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.150 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.150 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:26:17.150 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:26:17.150 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:26:17.150 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:26:17.150 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:26:17.150 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.150 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.150 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.150 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:17.150 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.150 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.150 [2024-12-05 12:57:59.496746] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:17.150 [2024-12-05 12:57:59.496792] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:17.150 [2024-12-05 12:57:59.496809] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:26:17.150 [2024-12-05 12:57:59.496819] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:17.150 [2024-12-05 12:57:59.498616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:17.150 [2024-12-05 12:57:59.498719] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:17.150 [2024-12-05 12:57:59.498786] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:17.150 [2024-12-05 12:57:59.498822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:17.150 
[2024-12-05 12:57:59.498919] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:17.150 [2024-12-05 12:57:59.498928] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:17.150 [2024-12-05 12:57:59.498940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:26:17.150 [2024-12-05 12:57:59.498983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:17.150 [2024-12-05 12:57:59.499062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:17.150 pt1 00:26:17.151 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.151 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:26:17.151 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:17.151 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:17.151 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:17.151 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:17.151 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:17.151 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:17.151 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:17.151 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:17.151 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:17.151 12:57:59 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:26:17.151 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:17.151 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:17.151 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.151 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.151 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.151 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:17.151 "name": "raid_bdev1", 00:26:17.151 "uuid": "96508a19-4022-4959-b52c-1167523e2c92", 00:26:17.151 "strip_size_kb": 64, 00:26:17.151 "state": "configuring", 00:26:17.151 "raid_level": "raid5f", 00:26:17.151 "superblock": true, 00:26:17.151 "num_base_bdevs": 4, 00:26:17.151 "num_base_bdevs_discovered": 2, 00:26:17.151 "num_base_bdevs_operational": 3, 00:26:17.151 "base_bdevs_list": [ 00:26:17.151 { 00:26:17.151 "name": null, 00:26:17.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.151 "is_configured": false, 00:26:17.151 "data_offset": 2048, 00:26:17.151 "data_size": 63488 00:26:17.151 }, 00:26:17.151 { 00:26:17.151 "name": "pt2", 00:26:17.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:17.151 "is_configured": true, 00:26:17.151 "data_offset": 2048, 00:26:17.151 "data_size": 63488 00:26:17.151 }, 00:26:17.151 { 00:26:17.151 "name": "pt3", 00:26:17.151 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:17.151 "is_configured": true, 00:26:17.151 "data_offset": 2048, 00:26:17.151 "data_size": 63488 00:26:17.151 }, 00:26:17.151 { 00:26:17.151 "name": null, 00:26:17.151 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:17.151 "is_configured": false, 00:26:17.151 "data_offset": 2048, 00:26:17.151 "data_size": 63488 00:26:17.151 } 00:26:17.151 ] 
00:26:17.151 }' 00:26:17.151 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:17.151 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.408 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:26:17.408 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.408 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:17.408 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.408 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.408 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:26:17.408 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:17.408 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.408 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.408 [2024-12-05 12:57:59.840841] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:17.408 [2024-12-05 12:57:59.840889] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:17.408 [2024-12-05 12:57:59.840905] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:26:17.408 [2024-12-05 12:57:59.840912] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:17.408 [2024-12-05 12:57:59.841246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:17.408 [2024-12-05 12:57:59.841257] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:26:17.408 [2024-12-05 12:57:59.841317] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:17.408 [2024-12-05 12:57:59.841332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:17.408 [2024-12-05 12:57:59.841429] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:26:17.408 [2024-12-05 12:57:59.841435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:17.408 [2024-12-05 12:57:59.841636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:26:17.408 [2024-12-05 12:57:59.845289] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:26:17.408 pt4 00:26:17.408 [2024-12-05 12:57:59.845448] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:26:17.408 [2024-12-05 12:57:59.845671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:17.408 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.409 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:17.409 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:17.409 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:17.409 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:17.409 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:17.409 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:17.409 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:17.409 12:57:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:17.409 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:17.409 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:17.409 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:17.409 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:17.409 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.409 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.409 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.409 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:17.409 "name": "raid_bdev1", 00:26:17.409 "uuid": "96508a19-4022-4959-b52c-1167523e2c92", 00:26:17.409 "strip_size_kb": 64, 00:26:17.409 "state": "online", 00:26:17.409 "raid_level": "raid5f", 00:26:17.409 "superblock": true, 00:26:17.409 "num_base_bdevs": 4, 00:26:17.409 "num_base_bdevs_discovered": 3, 00:26:17.409 "num_base_bdevs_operational": 3, 00:26:17.409 "base_bdevs_list": [ 00:26:17.409 { 00:26:17.409 "name": null, 00:26:17.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.409 "is_configured": false, 00:26:17.409 "data_offset": 2048, 00:26:17.409 "data_size": 63488 00:26:17.409 }, 00:26:17.409 { 00:26:17.409 "name": "pt2", 00:26:17.409 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:17.409 "is_configured": true, 00:26:17.409 "data_offset": 2048, 00:26:17.409 "data_size": 63488 00:26:17.409 }, 00:26:17.409 { 00:26:17.409 "name": "pt3", 00:26:17.409 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:17.409 "is_configured": true, 00:26:17.409 "data_offset": 2048, 00:26:17.409 "data_size": 63488 
00:26:17.409 }, 00:26:17.409 { 00:26:17.409 "name": "pt4", 00:26:17.409 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:17.409 "is_configured": true, 00:26:17.409 "data_offset": 2048, 00:26:17.409 "data_size": 63488 00:26:17.409 } 00:26:17.409 ] 00:26:17.409 }' 00:26:17.409 12:57:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:17.409 12:57:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.666 12:58:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:26:17.666 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.666 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.666 12:58:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:17.666 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.666 12:58:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:26:17.666 12:58:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:26:17.666 12:58:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:17.666 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.666 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.666 [2024-12-05 12:58:00.209992] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:17.666 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.666 12:58:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 96508a19-4022-4959-b52c-1167523e2c92 '!=' 96508a19-4022-4959-b52c-1167523e2c92 ']' 00:26:17.666 12:58:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81464 00:26:17.666 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81464 ']' 00:26:17.666 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81464 00:26:17.666 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:26:17.666 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:17.666 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81464 00:26:17.923 killing process with pid 81464 00:26:17.923 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:17.923 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:17.923 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81464' 00:26:17.923 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81464 00:26:17.923 [2024-12-05 12:58:00.256893] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:17.923 12:58:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81464 00:26:17.923 [2024-12-05 12:58:00.256963] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:17.923 [2024-12-05 12:58:00.257023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:17.923 [2024-12-05 12:58:00.257035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:26:17.923 [2024-12-05 12:58:00.451179] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:18.580 12:58:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:26:18.580 
00:26:18.580 real 0m5.973s 00:26:18.580 user 0m9.505s 00:26:18.580 sys 0m1.017s 00:26:18.580 12:58:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:18.580 12:58:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.580 ************************************ 00:26:18.580 END TEST raid5f_superblock_test 00:26:18.580 ************************************ 00:26:18.580 12:58:01 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:26:18.580 12:58:01 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:26:18.580 12:58:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:26:18.580 12:58:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:18.580 12:58:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:18.580 ************************************ 00:26:18.580 START TEST raid5f_rebuild_test 00:26:18.580 ************************************ 00:26:18.580 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:26:18.580 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:26:18.580 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:26:18.580 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:26:18.580 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:26:18.580 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:26:18.580 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:26:18.580 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:18.580 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:26:18.580 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:18.580 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:18.580 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:26:18.580 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:18.580 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:18.580 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:26:18.580 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:18.580 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:18.580 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:26:18.580 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:18.581 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:18.581 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:18.581 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:26:18.581 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:26:18.581 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:26:18.581 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:26:18.581 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:26:18.581 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:26:18.581 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:26:18.581 12:58:01 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:26:18.581 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:26:18.581 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:26:18.581 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:26:18.581 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81927 00:26:18.581 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81927 00:26:18.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.581 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81927 ']' 00:26:18.581 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.581 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:18.581 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.581 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:18.581 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.581 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:18.581 [2024-12-05 12:58:01.131474] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:26:18.581 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:18.581 Zero copy mechanism will not be used. 
00:26:18.581 [2024-12-05 12:58:01.131624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81927 ] 00:26:18.836 [2024-12-05 12:58:01.286925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.836 [2024-12-05 12:58:01.374472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.093 [2024-12-05 12:58:01.485765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:19.093 [2024-12-05 12:58:01.485907] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:19.658 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:19.658 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:26:19.658 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:19.658 12:58:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:19.658 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.658 12:58:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.658 BaseBdev1_malloc 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.658 [2024-12-05 12:58:02.016965] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:26:19.658 [2024-12-05 12:58:02.017134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:19.658 [2024-12-05 12:58:02.017156] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:19.658 [2024-12-05 12:58:02.017165] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:19.658 [2024-12-05 12:58:02.018895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:19.658 [2024-12-05 12:58:02.018925] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:19.658 BaseBdev1 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.658 BaseBdev2_malloc 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.658 [2024-12-05 12:58:02.048427] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:19.658 [2024-12-05 12:58:02.048614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:19.658 [2024-12-05 12:58:02.048635] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:19.658 [2024-12-05 12:58:02.048644] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:19.658 [2024-12-05 12:58:02.050332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:19.658 [2024-12-05 12:58:02.050358] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:19.658 BaseBdev2 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.658 BaseBdev3_malloc 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.658 [2024-12-05 12:58:02.093949] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:19.658 [2024-12-05 12:58:02.093992] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:19.658 [2024-12-05 12:58:02.094008] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:19.658 [2024-12-05 12:58:02.094016] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:19.658 
[2024-12-05 12:58:02.095691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:19.658 [2024-12-05 12:58:02.095848] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:19.658 BaseBdev3 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.658 BaseBdev4_malloc 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.658 [2024-12-05 12:58:02.129418] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:19.658 [2024-12-05 12:58:02.129461] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:19.658 [2024-12-05 12:58:02.129474] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:19.658 [2024-12-05 12:58:02.129482] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:19.658 [2024-12-05 12:58:02.131166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:19.658 [2024-12-05 12:58:02.131281] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:26:19.658 BaseBdev4 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.658 spare_malloc 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.658 spare_delay 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.658 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.658 [2024-12-05 12:58:02.169231] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:19.658 [2024-12-05 12:58:02.169273] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:19.658 [2024-12-05 12:58:02.169286] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:26:19.658 [2024-12-05 12:58:02.169294] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:19.659 [2024-12-05 12:58:02.171010] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:19.659 [2024-12-05 12:58:02.171043] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:19.659 spare 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.659 [2024-12-05 12:58:02.177275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:19.659 [2024-12-05 12:58:02.178874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:19.659 [2024-12-05 12:58:02.178986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:19.659 [2024-12-05 12:58:02.179084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:19.659 [2024-12-05 12:58:02.179176] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:19.659 [2024-12-05 12:58:02.179201] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:26:19.659 [2024-12-05 12:58:02.179505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:19.659 [2024-12-05 12:58:02.183467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:19.659 [2024-12-05 12:58:02.183559] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:19.659 [2024-12-05 12:58:02.184525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:19.659 12:58:02 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:19.659 "name": "raid_bdev1", 00:26:19.659 "uuid": "a3b1b10e-fccf-45b7-974b-bb88572cd7ce", 00:26:19.659 "strip_size_kb": 64, 00:26:19.659 "state": "online", 00:26:19.659 
"raid_level": "raid5f", 00:26:19.659 "superblock": false, 00:26:19.659 "num_base_bdevs": 4, 00:26:19.659 "num_base_bdevs_discovered": 4, 00:26:19.659 "num_base_bdevs_operational": 4, 00:26:19.659 "base_bdevs_list": [ 00:26:19.659 { 00:26:19.659 "name": "BaseBdev1", 00:26:19.659 "uuid": "9681db2c-ba98-5a63-986e-d67dc12c26fe", 00:26:19.659 "is_configured": true, 00:26:19.659 "data_offset": 0, 00:26:19.659 "data_size": 65536 00:26:19.659 }, 00:26:19.659 { 00:26:19.659 "name": "BaseBdev2", 00:26:19.659 "uuid": "919aff19-a603-540e-ac29-7b6b79107dfc", 00:26:19.659 "is_configured": true, 00:26:19.659 "data_offset": 0, 00:26:19.659 "data_size": 65536 00:26:19.659 }, 00:26:19.659 { 00:26:19.659 "name": "BaseBdev3", 00:26:19.659 "uuid": "56ba36d9-4738-5205-b513-1dfcd3dec0ad", 00:26:19.659 "is_configured": true, 00:26:19.659 "data_offset": 0, 00:26:19.659 "data_size": 65536 00:26:19.659 }, 00:26:19.659 { 00:26:19.659 "name": "BaseBdev4", 00:26:19.659 "uuid": "87681ef1-b6dd-5825-bc9f-ab575a25c689", 00:26:19.659 "is_configured": true, 00:26:19.659 "data_offset": 0, 00:26:19.659 "data_size": 65536 00:26:19.659 } 00:26:19.659 ] 00:26:19.659 }' 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:19.659 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.917 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:26:19.917 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:19.917 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.917 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.917 [2024-12-05 12:58:02.498597] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:20.175 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:20.175 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:26:20.175 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:20.175 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:20.175 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.175 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.175 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.175 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:26:20.175 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:26:20.175 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:26:20.175 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:26:20.175 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:26:20.175 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:20.175 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:20.175 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:20.175 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:20.175 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:20.175 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:20.175 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:20.175 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:26:20.175 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:20.175 [2024-12-05 12:58:02.738451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:26:20.175 /dev/nbd0 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:20.432 1+0 records in 00:26:20.432 1+0 records out 00:26:20.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211424 s, 19.4 MB/s 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:26:20.432 12:58:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:26:20.996 512+0 records in 00:26:20.996 512+0 records out 00:26:20.996 100663296 bytes (101 MB, 96 MiB) copied, 0.495323 s, 203 MB/s 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:20.996 
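(Side note, not part of the log: the `dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512` parameters above follow from the raid5f geometry reported earlier in this run. raid5f dedicates one strip per stripe to parity, so a full-stripe write covers strip_size_kb × (num_base_bdevs − 1) data strips, which is why the script sets `write_unit_size=384` blocks and `echo 192` KiB. A sketch of that arithmetic, with every value taken from the log:)

```python
# Values reported in the log records above.
BLOCK_SIZE = 512          # blocklen from raid_bdev_configure_cont
STRIP_SIZE_KB = 64        # strip_size_kb from raid_bdev_info
NUM_BASE_BDEVS = 4        # raid5f over four base bdevs
DD_COUNT = 512            # dd count used by the test

# raid5f stores parity on one strip per stripe, so a full-stripe
# write spans (num_base_bdevs - 1) data strips.
full_stripe_bytes = STRIP_SIZE_KB * 1024 * (NUM_BASE_BDEVS - 1)
write_unit_blocks = full_stripe_bytes // BLOCK_SIZE
total_bytes = full_stripe_bytes * DD_COUNT

print(full_stripe_bytes, write_unit_blocks, total_bytes)
```

This reproduces the dd line in the log: bs = 196608 bytes per record, a write unit of 384 blocks, and 196608 × 512 = 100663296 bytes (96 MiB) transferred in total.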
12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:20.996 [2024-12-05 12:58:03.505508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.996 [2024-12-05 12:58:03.511896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:20.996 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:20.997 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:20.997 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:26:20.997 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:20.997 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:20.997 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:20.997 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:20.997 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:20.997 12:58:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.997 12:58:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.997 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:20.997 12:58:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.997 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:20.997 "name": "raid_bdev1", 00:26:20.997 "uuid": "a3b1b10e-fccf-45b7-974b-bb88572cd7ce", 00:26:20.997 "strip_size_kb": 64, 00:26:20.997 "state": "online", 00:26:20.997 "raid_level": "raid5f", 00:26:20.997 "superblock": false, 00:26:20.997 "num_base_bdevs": 4, 00:26:20.997 "num_base_bdevs_discovered": 3, 00:26:20.997 "num_base_bdevs_operational": 3, 00:26:20.997 "base_bdevs_list": [ 00:26:20.997 { 00:26:20.997 "name": null, 00:26:20.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:20.997 "is_configured": false, 00:26:20.997 "data_offset": 0, 00:26:20.997 "data_size": 65536 00:26:20.997 }, 00:26:20.997 { 00:26:20.997 "name": "BaseBdev2", 00:26:20.997 "uuid": "919aff19-a603-540e-ac29-7b6b79107dfc", 00:26:20.997 "is_configured": true, 00:26:20.997 "data_offset": 0, 00:26:20.997 "data_size": 65536 00:26:20.997 }, 00:26:20.997 { 00:26:20.997 "name": "BaseBdev3", 00:26:20.997 "uuid": 
"56ba36d9-4738-5205-b513-1dfcd3dec0ad", 00:26:20.997 "is_configured": true, 00:26:20.997 "data_offset": 0, 00:26:20.997 "data_size": 65536 00:26:20.997 }, 00:26:20.997 { 00:26:20.997 "name": "BaseBdev4", 00:26:20.997 "uuid": "87681ef1-b6dd-5825-bc9f-ab575a25c689", 00:26:20.997 "is_configured": true, 00:26:20.997 "data_offset": 0, 00:26:20.997 "data_size": 65536 00:26:20.997 } 00:26:20.997 ] 00:26:20.997 }' 00:26:20.997 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:20.997 12:58:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.254 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:21.254 12:58:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.254 12:58:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.254 [2024-12-05 12:58:03.831954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:21.512 [2024-12-05 12:58:03.842039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:26:21.512 12:58:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.512 12:58:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:26:21.512 [2024-12-05 12:58:03.848825] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:22.446 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:22.446 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:22.446 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:22.446 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:22.446 12:58:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:22.446 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:22.446 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:22.446 12:58:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.446 12:58:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.446 12:58:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.446 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:22.446 "name": "raid_bdev1", 00:26:22.446 "uuid": "a3b1b10e-fccf-45b7-974b-bb88572cd7ce", 00:26:22.446 "strip_size_kb": 64, 00:26:22.446 "state": "online", 00:26:22.446 "raid_level": "raid5f", 00:26:22.446 "superblock": false, 00:26:22.446 "num_base_bdevs": 4, 00:26:22.446 "num_base_bdevs_discovered": 4, 00:26:22.446 "num_base_bdevs_operational": 4, 00:26:22.446 "process": { 00:26:22.446 "type": "rebuild", 00:26:22.446 "target": "spare", 00:26:22.446 "progress": { 00:26:22.446 "blocks": 17280, 00:26:22.446 "percent": 8 00:26:22.446 } 00:26:22.446 }, 00:26:22.446 "base_bdevs_list": [ 00:26:22.446 { 00:26:22.446 "name": "spare", 00:26:22.446 "uuid": "8f86235f-e801-56ec-82b6-4081eb78faf8", 00:26:22.446 "is_configured": true, 00:26:22.446 "data_offset": 0, 00:26:22.446 "data_size": 65536 00:26:22.446 }, 00:26:22.446 { 00:26:22.446 "name": "BaseBdev2", 00:26:22.446 "uuid": "919aff19-a603-540e-ac29-7b6b79107dfc", 00:26:22.446 "is_configured": true, 00:26:22.446 "data_offset": 0, 00:26:22.446 "data_size": 65536 00:26:22.446 }, 00:26:22.446 { 00:26:22.446 "name": "BaseBdev3", 00:26:22.446 "uuid": "56ba36d9-4738-5205-b513-1dfcd3dec0ad", 00:26:22.446 "is_configured": true, 00:26:22.446 "data_offset": 0, 00:26:22.446 "data_size": 65536 00:26:22.446 }, 
00:26:22.446 { 00:26:22.446 "name": "BaseBdev4", 00:26:22.446 "uuid": "87681ef1-b6dd-5825-bc9f-ab575a25c689", 00:26:22.446 "is_configured": true, 00:26:22.446 "data_offset": 0, 00:26:22.446 "data_size": 65536 00:26:22.446 } 00:26:22.446 ] 00:26:22.446 }' 00:26:22.446 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:22.446 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:22.446 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:22.446 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:22.446 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:26:22.446 12:58:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.446 12:58:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.446 [2024-12-05 12:58:04.949748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:22.446 [2024-12-05 12:58:04.957346] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:22.446 [2024-12-05 12:58:04.957407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:22.446 [2024-12-05 12:58:04.957425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:22.447 [2024-12-05 12:58:04.957436] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:22.447 12:58:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.447 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:22.447 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:26:22.447 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:22.447 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:22.447 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:22.447 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:22.447 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:22.447 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:22.447 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:22.447 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:22.447 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:22.447 12:58:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:22.447 12:58:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.447 12:58:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.447 12:58:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.447 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:22.447 "name": "raid_bdev1", 00:26:22.447 "uuid": "a3b1b10e-fccf-45b7-974b-bb88572cd7ce", 00:26:22.447 "strip_size_kb": 64, 00:26:22.447 "state": "online", 00:26:22.447 "raid_level": "raid5f", 00:26:22.447 "superblock": false, 00:26:22.447 "num_base_bdevs": 4, 00:26:22.447 "num_base_bdevs_discovered": 3, 00:26:22.447 "num_base_bdevs_operational": 3, 00:26:22.447 "base_bdevs_list": [ 00:26:22.447 { 00:26:22.447 "name": null, 00:26:22.447 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:22.447 "is_configured": false, 00:26:22.447 "data_offset": 0, 00:26:22.447 "data_size": 65536 00:26:22.447 }, 00:26:22.447 { 00:26:22.447 "name": "BaseBdev2", 00:26:22.447 "uuid": "919aff19-a603-540e-ac29-7b6b79107dfc", 00:26:22.447 "is_configured": true, 00:26:22.447 "data_offset": 0, 00:26:22.447 "data_size": 65536 00:26:22.447 }, 00:26:22.447 { 00:26:22.447 "name": "BaseBdev3", 00:26:22.447 "uuid": "56ba36d9-4738-5205-b513-1dfcd3dec0ad", 00:26:22.447 "is_configured": true, 00:26:22.447 "data_offset": 0, 00:26:22.447 "data_size": 65536 00:26:22.447 }, 00:26:22.447 { 00:26:22.447 "name": "BaseBdev4", 00:26:22.447 "uuid": "87681ef1-b6dd-5825-bc9f-ab575a25c689", 00:26:22.447 "is_configured": true, 00:26:22.447 "data_offset": 0, 00:26:22.447 "data_size": 65536 00:26:22.447 } 00:26:22.447 ] 00:26:22.447 }' 00:26:22.447 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:22.447 12:58:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.704 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:22.704 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:22.704 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:22.704 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:22.704 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:22.704 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:22.704 12:58:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.704 12:58:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.704 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:26:22.962 12:58:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.962 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:22.962 "name": "raid_bdev1", 00:26:22.962 "uuid": "a3b1b10e-fccf-45b7-974b-bb88572cd7ce", 00:26:22.962 "strip_size_kb": 64, 00:26:22.962 "state": "online", 00:26:22.962 "raid_level": "raid5f", 00:26:22.962 "superblock": false, 00:26:22.962 "num_base_bdevs": 4, 00:26:22.962 "num_base_bdevs_discovered": 3, 00:26:22.962 "num_base_bdevs_operational": 3, 00:26:22.962 "base_bdevs_list": [ 00:26:22.962 { 00:26:22.962 "name": null, 00:26:22.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:22.962 "is_configured": false, 00:26:22.962 "data_offset": 0, 00:26:22.962 "data_size": 65536 00:26:22.962 }, 00:26:22.962 { 00:26:22.962 "name": "BaseBdev2", 00:26:22.962 "uuid": "919aff19-a603-540e-ac29-7b6b79107dfc", 00:26:22.962 "is_configured": true, 00:26:22.962 "data_offset": 0, 00:26:22.962 "data_size": 65536 00:26:22.962 }, 00:26:22.962 { 00:26:22.962 "name": "BaseBdev3", 00:26:22.962 "uuid": "56ba36d9-4738-5205-b513-1dfcd3dec0ad", 00:26:22.962 "is_configured": true, 00:26:22.962 "data_offset": 0, 00:26:22.962 "data_size": 65536 00:26:22.962 }, 00:26:22.962 { 00:26:22.962 "name": "BaseBdev4", 00:26:22.962 "uuid": "87681ef1-b6dd-5825-bc9f-ab575a25c689", 00:26:22.962 "is_configured": true, 00:26:22.962 "data_offset": 0, 00:26:22.962 "data_size": 65536 00:26:22.962 } 00:26:22.962 ] 00:26:22.962 }' 00:26:22.962 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:22.962 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:22.962 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:22.962 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:26:22.962 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:22.962 12:58:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.962 12:58:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.962 [2024-12-05 12:58:05.376594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:22.962 [2024-12-05 12:58:05.386135] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:26:22.962 12:58:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.962 12:58:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:26:22.962 [2024-12-05 12:58:05.392559] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:23.910 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:23.910 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:23.910 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:23.910 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:23.910 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:23.910 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:23.910 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:23.910 12:58:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.910 12:58:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.910 12:58:06 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.910 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:23.910 "name": "raid_bdev1", 00:26:23.910 "uuid": "a3b1b10e-fccf-45b7-974b-bb88572cd7ce", 00:26:23.910 "strip_size_kb": 64, 00:26:23.910 "state": "online", 00:26:23.910 "raid_level": "raid5f", 00:26:23.910 "superblock": false, 00:26:23.910 "num_base_bdevs": 4, 00:26:23.910 "num_base_bdevs_discovered": 4, 00:26:23.910 "num_base_bdevs_operational": 4, 00:26:23.910 "process": { 00:26:23.910 "type": "rebuild", 00:26:23.910 "target": "spare", 00:26:23.910 "progress": { 00:26:23.910 "blocks": 19200, 00:26:23.910 "percent": 9 00:26:23.910 } 00:26:23.910 }, 00:26:23.910 "base_bdevs_list": [ 00:26:23.910 { 00:26:23.910 "name": "spare", 00:26:23.910 "uuid": "8f86235f-e801-56ec-82b6-4081eb78faf8", 00:26:23.910 "is_configured": true, 00:26:23.910 "data_offset": 0, 00:26:23.910 "data_size": 65536 00:26:23.910 }, 00:26:23.910 { 00:26:23.910 "name": "BaseBdev2", 00:26:23.910 "uuid": "919aff19-a603-540e-ac29-7b6b79107dfc", 00:26:23.910 "is_configured": true, 00:26:23.910 "data_offset": 0, 00:26:23.910 "data_size": 65536 00:26:23.910 }, 00:26:23.910 { 00:26:23.910 "name": "BaseBdev3", 00:26:23.910 "uuid": "56ba36d9-4738-5205-b513-1dfcd3dec0ad", 00:26:23.910 "is_configured": true, 00:26:23.910 "data_offset": 0, 00:26:23.910 "data_size": 65536 00:26:23.910 }, 00:26:23.910 { 00:26:23.910 "name": "BaseBdev4", 00:26:23.910 "uuid": "87681ef1-b6dd-5825-bc9f-ab575a25c689", 00:26:23.910 "is_configured": true, 00:26:23.910 "data_offset": 0, 00:26:23.910 "data_size": 65536 00:26:23.910 } 00:26:23.910 ] 00:26:23.910 }' 00:26:23.910 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:23.911 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:23.911 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:26:23.911 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:23.911 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:26:23.911 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:26:23.911 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:26:23.911 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=479 00:26:23.911 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:23.911 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:23.911 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:23.911 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:23.911 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:23.911 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:23.911 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:23.911 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:23.911 12:58:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.911 12:58:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:24.169 12:58:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.169 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:24.169 "name": "raid_bdev1", 00:26:24.169 "uuid": "a3b1b10e-fccf-45b7-974b-bb88572cd7ce", 00:26:24.169 "strip_size_kb": 64, 
00:26:24.169 "state": "online", 00:26:24.169 "raid_level": "raid5f", 00:26:24.169 "superblock": false, 00:26:24.169 "num_base_bdevs": 4, 00:26:24.169 "num_base_bdevs_discovered": 4, 00:26:24.169 "num_base_bdevs_operational": 4, 00:26:24.169 "process": { 00:26:24.169 "type": "rebuild", 00:26:24.169 "target": "spare", 00:26:24.169 "progress": { 00:26:24.169 "blocks": 19200, 00:26:24.169 "percent": 9 00:26:24.169 } 00:26:24.169 }, 00:26:24.169 "base_bdevs_list": [ 00:26:24.169 { 00:26:24.169 "name": "spare", 00:26:24.169 "uuid": "8f86235f-e801-56ec-82b6-4081eb78faf8", 00:26:24.169 "is_configured": true, 00:26:24.169 "data_offset": 0, 00:26:24.169 "data_size": 65536 00:26:24.169 }, 00:26:24.169 { 00:26:24.169 "name": "BaseBdev2", 00:26:24.169 "uuid": "919aff19-a603-540e-ac29-7b6b79107dfc", 00:26:24.169 "is_configured": true, 00:26:24.169 "data_offset": 0, 00:26:24.169 "data_size": 65536 00:26:24.169 }, 00:26:24.169 { 00:26:24.169 "name": "BaseBdev3", 00:26:24.169 "uuid": "56ba36d9-4738-5205-b513-1dfcd3dec0ad", 00:26:24.169 "is_configured": true, 00:26:24.169 "data_offset": 0, 00:26:24.169 "data_size": 65536 00:26:24.169 }, 00:26:24.169 { 00:26:24.169 "name": "BaseBdev4", 00:26:24.169 "uuid": "87681ef1-b6dd-5825-bc9f-ab575a25c689", 00:26:24.169 "is_configured": true, 00:26:24.169 "data_offset": 0, 00:26:24.169 "data_size": 65536 00:26:24.169 } 00:26:24.169 ] 00:26:24.169 }' 00:26:24.169 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:24.169 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:24.169 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:24.169 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:24.169 12:58:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:25.144 12:58:07 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:25.144 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:25.144 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:25.144 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:25.144 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:25.144 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:25.144 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:25.144 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.144 12:58:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.144 12:58:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.144 12:58:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.144 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:25.144 "name": "raid_bdev1", 00:26:25.144 "uuid": "a3b1b10e-fccf-45b7-974b-bb88572cd7ce", 00:26:25.144 "strip_size_kb": 64, 00:26:25.144 "state": "online", 00:26:25.144 "raid_level": "raid5f", 00:26:25.144 "superblock": false, 00:26:25.144 "num_base_bdevs": 4, 00:26:25.144 "num_base_bdevs_discovered": 4, 00:26:25.144 "num_base_bdevs_operational": 4, 00:26:25.144 "process": { 00:26:25.144 "type": "rebuild", 00:26:25.144 "target": "spare", 00:26:25.144 "progress": { 00:26:25.144 "blocks": 40320, 00:26:25.144 "percent": 20 00:26:25.144 } 00:26:25.144 }, 00:26:25.144 "base_bdevs_list": [ 00:26:25.144 { 00:26:25.144 "name": "spare", 00:26:25.144 "uuid": "8f86235f-e801-56ec-82b6-4081eb78faf8", 00:26:25.144 "is_configured": true, 
00:26:25.144 "data_offset": 0, 00:26:25.144 "data_size": 65536 00:26:25.144 }, 00:26:25.144 { 00:26:25.144 "name": "BaseBdev2", 00:26:25.144 "uuid": "919aff19-a603-540e-ac29-7b6b79107dfc", 00:26:25.144 "is_configured": true, 00:26:25.144 "data_offset": 0, 00:26:25.144 "data_size": 65536 00:26:25.144 }, 00:26:25.144 { 00:26:25.144 "name": "BaseBdev3", 00:26:25.144 "uuid": "56ba36d9-4738-5205-b513-1dfcd3dec0ad", 00:26:25.144 "is_configured": true, 00:26:25.144 "data_offset": 0, 00:26:25.144 "data_size": 65536 00:26:25.144 }, 00:26:25.144 { 00:26:25.144 "name": "BaseBdev4", 00:26:25.144 "uuid": "87681ef1-b6dd-5825-bc9f-ab575a25c689", 00:26:25.144 "is_configured": true, 00:26:25.144 "data_offset": 0, 00:26:25.144 "data_size": 65536 00:26:25.144 } 00:26:25.144 ] 00:26:25.144 }' 00:26:25.145 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:25.145 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:25.145 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:25.145 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:25.145 12:58:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:26.518 12:58:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:26.518 12:58:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:26.518 12:58:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:26.518 12:58:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:26.518 12:58:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:26.518 12:58:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:26:26.518 12:58:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:26.518 12:58:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.518 12:58:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.518 12:58:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:26.518 12:58:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.518 12:58:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:26.518 "name": "raid_bdev1", 00:26:26.518 "uuid": "a3b1b10e-fccf-45b7-974b-bb88572cd7ce", 00:26:26.518 "strip_size_kb": 64, 00:26:26.518 "state": "online", 00:26:26.518 "raid_level": "raid5f", 00:26:26.518 "superblock": false, 00:26:26.518 "num_base_bdevs": 4, 00:26:26.518 "num_base_bdevs_discovered": 4, 00:26:26.518 "num_base_bdevs_operational": 4, 00:26:26.518 "process": { 00:26:26.518 "type": "rebuild", 00:26:26.518 "target": "spare", 00:26:26.518 "progress": { 00:26:26.518 "blocks": 61440, 00:26:26.518 "percent": 31 00:26:26.518 } 00:26:26.518 }, 00:26:26.518 "base_bdevs_list": [ 00:26:26.518 { 00:26:26.518 "name": "spare", 00:26:26.518 "uuid": "8f86235f-e801-56ec-82b6-4081eb78faf8", 00:26:26.518 "is_configured": true, 00:26:26.518 "data_offset": 0, 00:26:26.518 "data_size": 65536 00:26:26.518 }, 00:26:26.518 { 00:26:26.518 "name": "BaseBdev2", 00:26:26.518 "uuid": "919aff19-a603-540e-ac29-7b6b79107dfc", 00:26:26.518 "is_configured": true, 00:26:26.518 "data_offset": 0, 00:26:26.518 "data_size": 65536 00:26:26.518 }, 00:26:26.518 { 00:26:26.518 "name": "BaseBdev3", 00:26:26.518 "uuid": "56ba36d9-4738-5205-b513-1dfcd3dec0ad", 00:26:26.518 "is_configured": true, 00:26:26.518 "data_offset": 0, 00:26:26.518 "data_size": 65536 00:26:26.518 }, 00:26:26.518 { 00:26:26.518 "name": "BaseBdev4", 00:26:26.518 "uuid": 
"87681ef1-b6dd-5825-bc9f-ab575a25c689", 00:26:26.518 "is_configured": true, 00:26:26.518 "data_offset": 0, 00:26:26.518 "data_size": 65536 00:26:26.518 } 00:26:26.518 ] 00:26:26.518 }' 00:26:26.518 12:58:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:26.518 12:58:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:26.518 12:58:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:26.518 12:58:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:26.518 12:58:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:27.451 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:27.451 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:27.451 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:27.451 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:27.451 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:27.451 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:27.451 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:27.451 12:58:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.451 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.451 12:58:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.451 12:58:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.451 12:58:09 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:27.451 "name": "raid_bdev1", 00:26:27.451 "uuid": "a3b1b10e-fccf-45b7-974b-bb88572cd7ce", 00:26:27.451 "strip_size_kb": 64, 00:26:27.451 "state": "online", 00:26:27.451 "raid_level": "raid5f", 00:26:27.451 "superblock": false, 00:26:27.451 "num_base_bdevs": 4, 00:26:27.451 "num_base_bdevs_discovered": 4, 00:26:27.451 "num_base_bdevs_operational": 4, 00:26:27.451 "process": { 00:26:27.451 "type": "rebuild", 00:26:27.451 "target": "spare", 00:26:27.451 "progress": { 00:26:27.451 "blocks": 82560, 00:26:27.451 "percent": 41 00:26:27.451 } 00:26:27.451 }, 00:26:27.451 "base_bdevs_list": [ 00:26:27.451 { 00:26:27.451 "name": "spare", 00:26:27.451 "uuid": "8f86235f-e801-56ec-82b6-4081eb78faf8", 00:26:27.451 "is_configured": true, 00:26:27.451 "data_offset": 0, 00:26:27.451 "data_size": 65536 00:26:27.451 }, 00:26:27.451 { 00:26:27.451 "name": "BaseBdev2", 00:26:27.451 "uuid": "919aff19-a603-540e-ac29-7b6b79107dfc", 00:26:27.451 "is_configured": true, 00:26:27.451 "data_offset": 0, 00:26:27.451 "data_size": 65536 00:26:27.451 }, 00:26:27.451 { 00:26:27.451 "name": "BaseBdev3", 00:26:27.451 "uuid": "56ba36d9-4738-5205-b513-1dfcd3dec0ad", 00:26:27.451 "is_configured": true, 00:26:27.451 "data_offset": 0, 00:26:27.451 "data_size": 65536 00:26:27.451 }, 00:26:27.451 { 00:26:27.451 "name": "BaseBdev4", 00:26:27.451 "uuid": "87681ef1-b6dd-5825-bc9f-ab575a25c689", 00:26:27.451 "is_configured": true, 00:26:27.451 "data_offset": 0, 00:26:27.451 "data_size": 65536 00:26:27.451 } 00:26:27.451 ] 00:26:27.451 }' 00:26:27.451 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:27.451 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:27.451 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:27.451 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:26:27.451 12:58:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:28.387 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:28.387 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:28.387 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:28.387 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:28.387 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:28.387 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:28.387 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:28.387 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:28.387 12:58:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.387 12:58:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:28.387 12:58:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.387 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:28.387 "name": "raid_bdev1", 00:26:28.387 "uuid": "a3b1b10e-fccf-45b7-974b-bb88572cd7ce", 00:26:28.387 "strip_size_kb": 64, 00:26:28.387 "state": "online", 00:26:28.387 "raid_level": "raid5f", 00:26:28.387 "superblock": false, 00:26:28.387 "num_base_bdevs": 4, 00:26:28.387 "num_base_bdevs_discovered": 4, 00:26:28.387 "num_base_bdevs_operational": 4, 00:26:28.387 "process": { 00:26:28.387 "type": "rebuild", 00:26:28.387 "target": "spare", 00:26:28.387 "progress": { 00:26:28.387 "blocks": 103680, 00:26:28.387 "percent": 52 00:26:28.387 } 00:26:28.387 }, 00:26:28.387 
"base_bdevs_list": [ 00:26:28.387 { 00:26:28.387 "name": "spare", 00:26:28.387 "uuid": "8f86235f-e801-56ec-82b6-4081eb78faf8", 00:26:28.387 "is_configured": true, 00:26:28.387 "data_offset": 0, 00:26:28.387 "data_size": 65536 00:26:28.387 }, 00:26:28.387 { 00:26:28.387 "name": "BaseBdev2", 00:26:28.387 "uuid": "919aff19-a603-540e-ac29-7b6b79107dfc", 00:26:28.387 "is_configured": true, 00:26:28.387 "data_offset": 0, 00:26:28.387 "data_size": 65536 00:26:28.387 }, 00:26:28.387 { 00:26:28.387 "name": "BaseBdev3", 00:26:28.387 "uuid": "56ba36d9-4738-5205-b513-1dfcd3dec0ad", 00:26:28.387 "is_configured": true, 00:26:28.387 "data_offset": 0, 00:26:28.387 "data_size": 65536 00:26:28.387 }, 00:26:28.387 { 00:26:28.387 "name": "BaseBdev4", 00:26:28.387 "uuid": "87681ef1-b6dd-5825-bc9f-ab575a25c689", 00:26:28.387 "is_configured": true, 00:26:28.387 "data_offset": 0, 00:26:28.387 "data_size": 65536 00:26:28.387 } 00:26:28.387 ] 00:26:28.387 }' 00:26:28.387 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:28.387 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:28.387 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:28.646 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:28.646 12:58:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:29.581 12:58:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:29.581 12:58:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:29.581 12:58:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:29.581 12:58:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:29.581 12:58:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:29.581 12:58:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:29.581 12:58:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:29.581 12:58:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.581 12:58:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.581 12:58:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:29.581 12:58:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.581 12:58:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:29.581 "name": "raid_bdev1", 00:26:29.581 "uuid": "a3b1b10e-fccf-45b7-974b-bb88572cd7ce", 00:26:29.581 "strip_size_kb": 64, 00:26:29.581 "state": "online", 00:26:29.581 "raid_level": "raid5f", 00:26:29.581 "superblock": false, 00:26:29.581 "num_base_bdevs": 4, 00:26:29.581 "num_base_bdevs_discovered": 4, 00:26:29.581 "num_base_bdevs_operational": 4, 00:26:29.581 "process": { 00:26:29.581 "type": "rebuild", 00:26:29.581 "target": "spare", 00:26:29.581 "progress": { 00:26:29.581 "blocks": 124800, 00:26:29.581 "percent": 63 00:26:29.581 } 00:26:29.581 }, 00:26:29.581 "base_bdevs_list": [ 00:26:29.581 { 00:26:29.581 "name": "spare", 00:26:29.581 "uuid": "8f86235f-e801-56ec-82b6-4081eb78faf8", 00:26:29.581 "is_configured": true, 00:26:29.582 "data_offset": 0, 00:26:29.582 "data_size": 65536 00:26:29.582 }, 00:26:29.582 { 00:26:29.582 "name": "BaseBdev2", 00:26:29.582 "uuid": "919aff19-a603-540e-ac29-7b6b79107dfc", 00:26:29.582 "is_configured": true, 00:26:29.582 "data_offset": 0, 00:26:29.582 "data_size": 65536 00:26:29.582 }, 00:26:29.582 { 00:26:29.582 "name": "BaseBdev3", 00:26:29.582 "uuid": "56ba36d9-4738-5205-b513-1dfcd3dec0ad", 00:26:29.582 
"is_configured": true, 00:26:29.582 "data_offset": 0, 00:26:29.582 "data_size": 65536 00:26:29.582 }, 00:26:29.582 { 00:26:29.582 "name": "BaseBdev4", 00:26:29.582 "uuid": "87681ef1-b6dd-5825-bc9f-ab575a25c689", 00:26:29.582 "is_configured": true, 00:26:29.582 "data_offset": 0, 00:26:29.582 "data_size": 65536 00:26:29.582 } 00:26:29.582 ] 00:26:29.582 }' 00:26:29.582 12:58:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:29.582 12:58:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:29.582 12:58:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:29.582 12:58:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:29.582 12:58:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:30.529 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:30.529 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:30.529 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:30.529 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:30.529 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:30.529 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:30.529 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:30.529 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:30.529 12:58:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.529 12:58:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:26:30.529 12:58:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.786 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:30.786 "name": "raid_bdev1", 00:26:30.786 "uuid": "a3b1b10e-fccf-45b7-974b-bb88572cd7ce", 00:26:30.786 "strip_size_kb": 64, 00:26:30.786 "state": "online", 00:26:30.786 "raid_level": "raid5f", 00:26:30.786 "superblock": false, 00:26:30.786 "num_base_bdevs": 4, 00:26:30.786 "num_base_bdevs_discovered": 4, 00:26:30.786 "num_base_bdevs_operational": 4, 00:26:30.786 "process": { 00:26:30.786 "type": "rebuild", 00:26:30.786 "target": "spare", 00:26:30.786 "progress": { 00:26:30.786 "blocks": 145920, 00:26:30.787 "percent": 74 00:26:30.787 } 00:26:30.787 }, 00:26:30.787 "base_bdevs_list": [ 00:26:30.787 { 00:26:30.787 "name": "spare", 00:26:30.787 "uuid": "8f86235f-e801-56ec-82b6-4081eb78faf8", 00:26:30.787 "is_configured": true, 00:26:30.787 "data_offset": 0, 00:26:30.787 "data_size": 65536 00:26:30.787 }, 00:26:30.787 { 00:26:30.787 "name": "BaseBdev2", 00:26:30.787 "uuid": "919aff19-a603-540e-ac29-7b6b79107dfc", 00:26:30.787 "is_configured": true, 00:26:30.787 "data_offset": 0, 00:26:30.787 "data_size": 65536 00:26:30.787 }, 00:26:30.787 { 00:26:30.787 "name": "BaseBdev3", 00:26:30.787 "uuid": "56ba36d9-4738-5205-b513-1dfcd3dec0ad", 00:26:30.787 "is_configured": true, 00:26:30.787 "data_offset": 0, 00:26:30.787 "data_size": 65536 00:26:30.787 }, 00:26:30.787 { 00:26:30.787 "name": "BaseBdev4", 00:26:30.787 "uuid": "87681ef1-b6dd-5825-bc9f-ab575a25c689", 00:26:30.787 "is_configured": true, 00:26:30.787 "data_offset": 0, 00:26:30.787 "data_size": 65536 00:26:30.787 } 00:26:30.787 ] 00:26:30.787 }' 00:26:30.787 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:30.787 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:30.787 12:58:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:30.787 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:30.787 12:58:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:31.720 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:31.720 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:31.720 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:31.720 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:31.720 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:31.720 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:31.720 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.720 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:31.720 12:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.720 12:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.720 12:58:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.720 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:31.720 "name": "raid_bdev1", 00:26:31.720 "uuid": "a3b1b10e-fccf-45b7-974b-bb88572cd7ce", 00:26:31.720 "strip_size_kb": 64, 00:26:31.720 "state": "online", 00:26:31.720 "raid_level": "raid5f", 00:26:31.720 "superblock": false, 00:26:31.720 "num_base_bdevs": 4, 00:26:31.720 "num_base_bdevs_discovered": 4, 00:26:31.720 "num_base_bdevs_operational": 4, 00:26:31.720 "process": { 00:26:31.720 
"type": "rebuild", 00:26:31.720 "target": "spare", 00:26:31.720 "progress": { 00:26:31.720 "blocks": 167040, 00:26:31.720 "percent": 84 00:26:31.720 } 00:26:31.720 }, 00:26:31.720 "base_bdevs_list": [ 00:26:31.720 { 00:26:31.720 "name": "spare", 00:26:31.720 "uuid": "8f86235f-e801-56ec-82b6-4081eb78faf8", 00:26:31.720 "is_configured": true, 00:26:31.720 "data_offset": 0, 00:26:31.720 "data_size": 65536 00:26:31.720 }, 00:26:31.720 { 00:26:31.720 "name": "BaseBdev2", 00:26:31.720 "uuid": "919aff19-a603-540e-ac29-7b6b79107dfc", 00:26:31.720 "is_configured": true, 00:26:31.720 "data_offset": 0, 00:26:31.720 "data_size": 65536 00:26:31.720 }, 00:26:31.720 { 00:26:31.720 "name": "BaseBdev3", 00:26:31.720 "uuid": "56ba36d9-4738-5205-b513-1dfcd3dec0ad", 00:26:31.720 "is_configured": true, 00:26:31.720 "data_offset": 0, 00:26:31.720 "data_size": 65536 00:26:31.720 }, 00:26:31.720 { 00:26:31.720 "name": "BaseBdev4", 00:26:31.720 "uuid": "87681ef1-b6dd-5825-bc9f-ab575a25c689", 00:26:31.720 "is_configured": true, 00:26:31.720 "data_offset": 0, 00:26:31.720 "data_size": 65536 00:26:31.720 } 00:26:31.720 ] 00:26:31.720 }' 00:26:31.720 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:31.720 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:31.720 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:31.720 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:31.720 12:58:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:32.743 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:32.743 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:32.743 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:26:32.743 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:32.743 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:32.743 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:32.743 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:32.743 12:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.743 12:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.743 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.743 12:58:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.000 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:33.000 "name": "raid_bdev1", 00:26:33.000 "uuid": "a3b1b10e-fccf-45b7-974b-bb88572cd7ce", 00:26:33.000 "strip_size_kb": 64, 00:26:33.000 "state": "online", 00:26:33.000 "raid_level": "raid5f", 00:26:33.000 "superblock": false, 00:26:33.000 "num_base_bdevs": 4, 00:26:33.000 "num_base_bdevs_discovered": 4, 00:26:33.000 "num_base_bdevs_operational": 4, 00:26:33.000 "process": { 00:26:33.000 "type": "rebuild", 00:26:33.000 "target": "spare", 00:26:33.000 "progress": { 00:26:33.000 "blocks": 188160, 00:26:33.000 "percent": 95 00:26:33.000 } 00:26:33.000 }, 00:26:33.000 "base_bdevs_list": [ 00:26:33.000 { 00:26:33.000 "name": "spare", 00:26:33.000 "uuid": "8f86235f-e801-56ec-82b6-4081eb78faf8", 00:26:33.000 "is_configured": true, 00:26:33.000 "data_offset": 0, 00:26:33.000 "data_size": 65536 00:26:33.000 }, 00:26:33.000 { 00:26:33.000 "name": "BaseBdev2", 00:26:33.000 "uuid": "919aff19-a603-540e-ac29-7b6b79107dfc", 00:26:33.000 "is_configured": true, 00:26:33.000 "data_offset": 0, 00:26:33.000 
"data_size": 65536 00:26:33.000 }, 00:26:33.000 { 00:26:33.000 "name": "BaseBdev3", 00:26:33.000 "uuid": "56ba36d9-4738-5205-b513-1dfcd3dec0ad", 00:26:33.000 "is_configured": true, 00:26:33.000 "data_offset": 0, 00:26:33.000 "data_size": 65536 00:26:33.000 }, 00:26:33.000 { 00:26:33.000 "name": "BaseBdev4", 00:26:33.000 "uuid": "87681ef1-b6dd-5825-bc9f-ab575a25c689", 00:26:33.000 "is_configured": true, 00:26:33.000 "data_offset": 0, 00:26:33.000 "data_size": 65536 00:26:33.000 } 00:26:33.000 ] 00:26:33.000 }' 00:26:33.000 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:33.000 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:33.000 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:33.000 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:33.000 12:58:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:33.257 [2024-12-05 12:58:15.762301] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:33.257 [2024-12-05 12:58:15.762365] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:33.257 [2024-12-05 12:58:15.762409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:33.822 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:33.822 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:33.822 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:33.822 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:33.822 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:26:33.822 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:33.822 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:33.822 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.822 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.822 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.091 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.091 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:34.091 "name": "raid_bdev1", 00:26:34.091 "uuid": "a3b1b10e-fccf-45b7-974b-bb88572cd7ce", 00:26:34.091 "strip_size_kb": 64, 00:26:34.091 "state": "online", 00:26:34.091 "raid_level": "raid5f", 00:26:34.091 "superblock": false, 00:26:34.091 "num_base_bdevs": 4, 00:26:34.091 "num_base_bdevs_discovered": 4, 00:26:34.091 "num_base_bdevs_operational": 4, 00:26:34.091 "base_bdevs_list": [ 00:26:34.091 { 00:26:34.091 "name": "spare", 00:26:34.091 "uuid": "8f86235f-e801-56ec-82b6-4081eb78faf8", 00:26:34.091 "is_configured": true, 00:26:34.091 "data_offset": 0, 00:26:34.091 "data_size": 65536 00:26:34.091 }, 00:26:34.091 { 00:26:34.091 "name": "BaseBdev2", 00:26:34.091 "uuid": "919aff19-a603-540e-ac29-7b6b79107dfc", 00:26:34.091 "is_configured": true, 00:26:34.091 "data_offset": 0, 00:26:34.091 "data_size": 65536 00:26:34.091 }, 00:26:34.091 { 00:26:34.091 "name": "BaseBdev3", 00:26:34.091 "uuid": "56ba36d9-4738-5205-b513-1dfcd3dec0ad", 00:26:34.091 "is_configured": true, 00:26:34.091 "data_offset": 0, 00:26:34.091 "data_size": 65536 00:26:34.091 }, 00:26:34.091 { 00:26:34.091 "name": "BaseBdev4", 00:26:34.091 "uuid": "87681ef1-b6dd-5825-bc9f-ab575a25c689", 00:26:34.091 "is_configured": true, 00:26:34.091 "data_offset": 0, 
00:26:34.091 "data_size": 65536 00:26:34.091 } 00:26:34.091 ] 00:26:34.091 }' 00:26:34.091 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:34.091 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:34.091 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:34.091 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:26:34.091 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:26:34.091 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:34.091 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:34.091 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:34.091 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:34.091 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:34.091 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:34.091 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.091 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.091 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.091 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.091 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:34.091 "name": "raid_bdev1", 00:26:34.091 "uuid": "a3b1b10e-fccf-45b7-974b-bb88572cd7ce", 00:26:34.091 "strip_size_kb": 64, 00:26:34.091 "state": "online", 00:26:34.091 "raid_level": 
"raid5f", 00:26:34.091 "superblock": false, 00:26:34.091 "num_base_bdevs": 4, 00:26:34.091 "num_base_bdevs_discovered": 4, 00:26:34.091 "num_base_bdevs_operational": 4, 00:26:34.091 "base_bdevs_list": [ 00:26:34.091 { 00:26:34.091 "name": "spare", 00:26:34.092 "uuid": "8f86235f-e801-56ec-82b6-4081eb78faf8", 00:26:34.092 "is_configured": true, 00:26:34.092 "data_offset": 0, 00:26:34.092 "data_size": 65536 00:26:34.092 }, 00:26:34.092 { 00:26:34.092 "name": "BaseBdev2", 00:26:34.092 "uuid": "919aff19-a603-540e-ac29-7b6b79107dfc", 00:26:34.092 "is_configured": true, 00:26:34.092 "data_offset": 0, 00:26:34.092 "data_size": 65536 00:26:34.092 }, 00:26:34.092 { 00:26:34.092 "name": "BaseBdev3", 00:26:34.092 "uuid": "56ba36d9-4738-5205-b513-1dfcd3dec0ad", 00:26:34.092 "is_configured": true, 00:26:34.092 "data_offset": 0, 00:26:34.092 "data_size": 65536 00:26:34.092 }, 00:26:34.092 { 00:26:34.092 "name": "BaseBdev4", 00:26:34.092 "uuid": "87681ef1-b6dd-5825-bc9f-ab575a25c689", 00:26:34.092 "is_configured": true, 00:26:34.092 "data_offset": 0, 00:26:34.092 "data_size": 65536 00:26:34.092 } 00:26:34.092 ] 00:26:34.092 }' 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:34.092 "name": "raid_bdev1", 00:26:34.092 "uuid": "a3b1b10e-fccf-45b7-974b-bb88572cd7ce", 00:26:34.092 "strip_size_kb": 64, 00:26:34.092 "state": "online", 00:26:34.092 "raid_level": "raid5f", 00:26:34.092 "superblock": false, 00:26:34.092 "num_base_bdevs": 4, 00:26:34.092 "num_base_bdevs_discovered": 4, 00:26:34.092 "num_base_bdevs_operational": 4, 00:26:34.092 "base_bdevs_list": [ 00:26:34.092 { 00:26:34.092 "name": "spare", 00:26:34.092 "uuid": "8f86235f-e801-56ec-82b6-4081eb78faf8", 00:26:34.092 "is_configured": true, 00:26:34.092 "data_offset": 0, 00:26:34.092 "data_size": 65536 00:26:34.092 }, 00:26:34.092 { 00:26:34.092 "name": "BaseBdev2", 
00:26:34.092 "uuid": "919aff19-a603-540e-ac29-7b6b79107dfc", 00:26:34.092 "is_configured": true, 00:26:34.092 "data_offset": 0, 00:26:34.092 "data_size": 65536 00:26:34.092 }, 00:26:34.092 { 00:26:34.092 "name": "BaseBdev3", 00:26:34.092 "uuid": "56ba36d9-4738-5205-b513-1dfcd3dec0ad", 00:26:34.092 "is_configured": true, 00:26:34.092 "data_offset": 0, 00:26:34.092 "data_size": 65536 00:26:34.092 }, 00:26:34.092 { 00:26:34.092 "name": "BaseBdev4", 00:26:34.092 "uuid": "87681ef1-b6dd-5825-bc9f-ab575a25c689", 00:26:34.092 "is_configured": true, 00:26:34.092 "data_offset": 0, 00:26:34.092 "data_size": 65536 00:26:34.092 } 00:26:34.092 ] 00:26:34.092 }' 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:34.092 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.349 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:34.349 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.349 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.607 [2024-12-05 12:58:16.934922] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:34.607 [2024-12-05 12:58:16.934950] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:34.607 [2024-12-05 12:58:16.935012] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:34.607 [2024-12-05 12:58:16.935090] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:34.607 [2024-12-05 12:58:16.935098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:34.607 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.607 12:58:16 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:34.607 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:26:34.607 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.607 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.607 12:58:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.607 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:26:34.607 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:26:34.607 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:26:34.607 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:34.607 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:34.607 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:34.607 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:34.607 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:34.607 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:34.607 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:34.607 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:34.607 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:34.607 12:58:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:34.607 /dev/nbd0 00:26:34.607 12:58:17 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:34.607 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:34.607 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:34.607 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:26:34.607 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:34.607 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:34.607 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:34.865 1+0 records in 00:26:34.865 1+0 records out 00:26:34.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000130182 s, 31.5 MB/s 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:26:34.865 /dev/nbd1 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:34.865 1+0 records in 00:26:34.865 1+0 records out 00:26:34.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276589 s, 14.8 MB/s 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:34.865 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:35.122 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:26:35.122 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:26:35.122 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:35.122 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:35.122 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:35.122 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:35.122 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:26:35.379 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:35.379 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:35.379 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:35.379 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:35.379 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:35.379 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:26:35.379 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:35.379 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:35.379 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:35.379 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:26:35.638 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:35.638 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:35.638 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:35.638 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:35.638 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:35.638 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:35.638 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:35.638 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:35.638 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:26:35.638 12:58:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81927 00:26:35.638 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81927 ']' 00:26:35.638 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81927 00:26:35.638 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:26:35.638 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:35.638 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 81927 00:26:35.638 killing process with pid 81927 00:26:35.638 Received shutdown signal, test time was about 60.000000 seconds 00:26:35.638 00:26:35.638 Latency(us) 00:26:35.638 [2024-12-05T12:58:18.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.638 [2024-12-05T12:58:18.225Z] =================================================================================================================== 00:26:35.638 [2024-12-05T12:58:18.225Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:35.638 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:35.638 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:35.638 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81927' 00:26:35.638 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81927 00:26:35.638 [2024-12-05 12:58:17.993709] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:35.638 12:58:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81927 00:26:35.896 [2024-12-05 12:58:18.234110] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:36.460 12:58:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:26:36.460 00:26:36.460 real 0m17.738s 00:26:36.460 user 0m20.828s 00:26:36.460 sys 0m1.674s 00:26:36.460 ************************************ 00:26:36.460 END TEST raid5f_rebuild_test 00:26:36.460 ************************************ 00:26:36.460 12:58:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:36.460 12:58:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.460 12:58:18 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:26:36.460 12:58:18 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:26:36.460 12:58:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:36.460 12:58:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:36.460 ************************************ 00:26:36.460 START TEST raid5f_rebuild_test_sb 00:26:36.460 ************************************ 00:26:36.460 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:26:36.460 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:26:36.460 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:26:36.460 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:26:36.460 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:26:36.460 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:26:36.460 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:26:36.460 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:36.460 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:26:36.460 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:36.460 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:36.460 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:26:36.460 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:36.460 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:36.460 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:26:36.461 12:58:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:26:36.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82427 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82427 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82427 ']' 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:36.461 12:58:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:36.461 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:36.461 Zero copy mechanism will not be used. 00:26:36.461 [2024-12-05 12:58:18.919388] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:26:36.461 [2024-12-05 12:58:18.919546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82427 ] 00:26:36.718 [2024-12-05 12:58:19.071078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.718 [2024-12-05 12:58:19.177653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.975 [2024-12-05 12:58:19.312588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:36.975 [2024-12-05 12:58:19.312755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:37.233 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:37.233 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:26:37.233 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:37.233 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:37.233 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.234 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.234 BaseBdev1_malloc 00:26:37.234 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.234 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:37.234 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.234 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.234 [2024-12-05 12:58:19.765957] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:37.234 [2024-12-05 12:58:19.766136] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.234 [2024-12-05 12:58:19.766163] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:37.234 [2024-12-05 12:58:19.766174] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.234 [2024-12-05 12:58:19.768273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.234 [2024-12-05 12:58:19.768310] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:37.234 BaseBdev1 00:26:37.234 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.234 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:37.234 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:37.234 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.234 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.234 BaseBdev2_malloc 00:26:37.234 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.234 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:37.234 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.234 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.234 [2024-12-05 12:58:19.805834] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:37.234 [2024-12-05 12:58:19.805883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:26:37.234 [2024-12-05 12:58:19.805904] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:37.234 [2024-12-05 12:58:19.805914] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.234 [2024-12-05 12:58:19.807968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.234 [2024-12-05 12:58:19.808002] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:37.234 BaseBdev2 00:26:37.234 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.234 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:37.234 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:37.234 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.234 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.492 BaseBdev3_malloc 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.492 [2024-12-05 12:58:19.867479] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:37.492 [2024-12-05 12:58:19.867550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.492 [2024-12-05 12:58:19.867571] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:37.492 [2024-12-05 
12:58:19.867582] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.492 [2024-12-05 12:58:19.869698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.492 [2024-12-05 12:58:19.869735] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:37.492 BaseBdev3 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.492 BaseBdev4_malloc 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.492 [2024-12-05 12:58:19.903601] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:37.492 [2024-12-05 12:58:19.903653] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.492 [2024-12-05 12:58:19.903670] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:37.492 [2024-12-05 12:58:19.903679] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.492 [2024-12-05 12:58:19.905781] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:26:37.492 [2024-12-05 12:58:19.905818] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:37.492 BaseBdev4 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.492 spare_malloc 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.492 spare_delay 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.492 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.493 [2024-12-05 12:58:19.947759] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:37.493 [2024-12-05 12:58:19.947804] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.493 [2024-12-05 12:58:19.947820] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:26:37.493 [2024-12-05 12:58:19.947830] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.493 [2024-12-05 12:58:19.949935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.493 [2024-12-05 12:58:19.949970] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:37.493 spare 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.493 [2024-12-05 12:58:19.955824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:37.493 [2024-12-05 12:58:19.957730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:37.493 [2024-12-05 12:58:19.957790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:37.493 [2024-12-05 12:58:19.957841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:37.493 [2024-12-05 12:58:19.958021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:37.493 [2024-12-05 12:58:19.958033] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:37.493 [2024-12-05 12:58:19.958275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:37.493 [2024-12-05 12:58:19.963203] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:37.493 [2024-12-05 12:58:19.963317] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:26:37.493 [2024-12-05 12:58:19.963513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.493 12:58:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:37.493 "name": "raid_bdev1", 00:26:37.493 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:37.493 "strip_size_kb": 64, 00:26:37.493 "state": "online", 00:26:37.493 "raid_level": "raid5f", 00:26:37.493 "superblock": true, 00:26:37.493 "num_base_bdevs": 4, 00:26:37.493 "num_base_bdevs_discovered": 4, 00:26:37.493 "num_base_bdevs_operational": 4, 00:26:37.493 "base_bdevs_list": [ 00:26:37.493 { 00:26:37.493 "name": "BaseBdev1", 00:26:37.493 "uuid": "f8e49c8a-d8b5-5e86-910b-67be0f2b5eab", 00:26:37.493 "is_configured": true, 00:26:37.493 "data_offset": 2048, 00:26:37.493 "data_size": 63488 00:26:37.493 }, 00:26:37.493 { 00:26:37.493 "name": "BaseBdev2", 00:26:37.493 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:37.493 "is_configured": true, 00:26:37.493 "data_offset": 2048, 00:26:37.493 "data_size": 63488 00:26:37.493 }, 00:26:37.493 { 00:26:37.493 "name": "BaseBdev3", 00:26:37.493 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:37.493 "is_configured": true, 00:26:37.493 "data_offset": 2048, 00:26:37.493 "data_size": 63488 00:26:37.493 }, 00:26:37.493 { 00:26:37.493 "name": "BaseBdev4", 00:26:37.493 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:37.493 "is_configured": true, 00:26:37.493 "data_offset": 2048, 00:26:37.493 "data_size": 63488 00:26:37.493 } 00:26:37.493 ] 00:26:37.493 }' 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:37.493 12:58:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:26:37.751 12:58:20 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.751 [2024-12-05 12:58:20.277199] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:37.751 12:58:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:37.751 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:38.008 [2024-12-05 12:58:20.517076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:26:38.008 /dev/nbd0 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:38.008 1+0 records in 00:26:38.008 
1+0 records out 00:26:38.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016082 s, 25.5 MB/s 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:26:38.008 12:58:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:26:38.572 496+0 records in 00:26:38.572 496+0 records out 00:26:38.572 97517568 bytes (98 MB, 93 MiB) copied, 0.517374 s, 188 MB/s 00:26:38.572 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:26:38.572 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:26:38.572 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:38.572 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:38.572 12:58:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:26:38.572 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:38.572 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:38.830 [2024-12-05 12:58:21.309563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:38.830 [2024-12-05 12:58:21.318921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:38.830 12:58:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:38.830 "name": "raid_bdev1", 00:26:38.830 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:38.830 "strip_size_kb": 64, 00:26:38.830 "state": "online", 00:26:38.830 "raid_level": "raid5f", 00:26:38.830 "superblock": true, 00:26:38.830 "num_base_bdevs": 4, 00:26:38.830 "num_base_bdevs_discovered": 3, 00:26:38.830 "num_base_bdevs_operational": 3, 00:26:38.830 
"base_bdevs_list": [ 00:26:38.830 { 00:26:38.830 "name": null, 00:26:38.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:38.830 "is_configured": false, 00:26:38.830 "data_offset": 0, 00:26:38.830 "data_size": 63488 00:26:38.830 }, 00:26:38.830 { 00:26:38.830 "name": "BaseBdev2", 00:26:38.830 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:38.830 "is_configured": true, 00:26:38.830 "data_offset": 2048, 00:26:38.830 "data_size": 63488 00:26:38.830 }, 00:26:38.830 { 00:26:38.830 "name": "BaseBdev3", 00:26:38.830 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:38.830 "is_configured": true, 00:26:38.830 "data_offset": 2048, 00:26:38.830 "data_size": 63488 00:26:38.830 }, 00:26:38.830 { 00:26:38.830 "name": "BaseBdev4", 00:26:38.830 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:38.830 "is_configured": true, 00:26:38.830 "data_offset": 2048, 00:26:38.830 "data_size": 63488 00:26:38.830 } 00:26:38.830 ] 00:26:38.830 }' 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:38.830 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:39.087 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:39.087 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.087 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:39.087 [2024-12-05 12:58:21.671007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:39.345 [2024-12-05 12:58:21.680979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:26:39.345 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.345 12:58:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:26:39.345 [2024-12-05 12:58:21.687658] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:40.275 "name": "raid_bdev1", 00:26:40.275 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:40.275 "strip_size_kb": 64, 00:26:40.275 "state": "online", 00:26:40.275 "raid_level": "raid5f", 00:26:40.275 "superblock": true, 00:26:40.275 "num_base_bdevs": 4, 00:26:40.275 "num_base_bdevs_discovered": 4, 00:26:40.275 "num_base_bdevs_operational": 4, 00:26:40.275 "process": { 00:26:40.275 "type": "rebuild", 00:26:40.275 "target": "spare", 00:26:40.275 "progress": { 00:26:40.275 "blocks": 17280, 00:26:40.275 "percent": 9 00:26:40.275 } 00:26:40.275 }, 00:26:40.275 "base_bdevs_list": [ 00:26:40.275 { 00:26:40.275 "name": "spare", 00:26:40.275 "uuid": 
"784171fb-e490-52ea-ac36-5eac5dbbe902", 00:26:40.275 "is_configured": true, 00:26:40.275 "data_offset": 2048, 00:26:40.275 "data_size": 63488 00:26:40.275 }, 00:26:40.275 { 00:26:40.275 "name": "BaseBdev2", 00:26:40.275 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:40.275 "is_configured": true, 00:26:40.275 "data_offset": 2048, 00:26:40.275 "data_size": 63488 00:26:40.275 }, 00:26:40.275 { 00:26:40.275 "name": "BaseBdev3", 00:26:40.275 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:40.275 "is_configured": true, 00:26:40.275 "data_offset": 2048, 00:26:40.275 "data_size": 63488 00:26:40.275 }, 00:26:40.275 { 00:26:40.275 "name": "BaseBdev4", 00:26:40.275 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:40.275 "is_configured": true, 00:26:40.275 "data_offset": 2048, 00:26:40.275 "data_size": 63488 00:26:40.275 } 00:26:40.275 ] 00:26:40.275 }' 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.275 [2024-12-05 12:58:22.784476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:40.275 [2024-12-05 12:58:22.796004] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:40.275 [2024-12-05 12:58:22.796069] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:40.275 [2024-12-05 12:58:22.796086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:40.275 [2024-12-05 12:58:22.796095] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:40.275 "name": "raid_bdev1", 00:26:40.275 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:40.275 "strip_size_kb": 64, 00:26:40.275 "state": "online", 00:26:40.275 "raid_level": "raid5f", 00:26:40.275 "superblock": true, 00:26:40.275 "num_base_bdevs": 4, 00:26:40.275 "num_base_bdevs_discovered": 3, 00:26:40.275 "num_base_bdevs_operational": 3, 00:26:40.275 "base_bdevs_list": [ 00:26:40.275 { 00:26:40.275 "name": null, 00:26:40.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.275 "is_configured": false, 00:26:40.275 "data_offset": 0, 00:26:40.275 "data_size": 63488 00:26:40.275 }, 00:26:40.275 { 00:26:40.275 "name": "BaseBdev2", 00:26:40.275 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:40.275 "is_configured": true, 00:26:40.275 "data_offset": 2048, 00:26:40.275 "data_size": 63488 00:26:40.275 }, 00:26:40.275 { 00:26:40.275 "name": "BaseBdev3", 00:26:40.275 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:40.275 "is_configured": true, 00:26:40.275 "data_offset": 2048, 00:26:40.275 "data_size": 63488 00:26:40.275 }, 00:26:40.275 { 00:26:40.275 "name": "BaseBdev4", 00:26:40.275 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:40.275 "is_configured": true, 00:26:40.275 "data_offset": 2048, 00:26:40.275 "data_size": 63488 00:26:40.275 } 00:26:40.275 ] 00:26:40.275 }' 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:40.275 12:58:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.840 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:40.840 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:40.840 
12:58:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:40.840 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:40.840 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:40.840 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:40.840 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.840 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.840 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.840 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.840 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:40.840 "name": "raid_bdev1", 00:26:40.840 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:40.840 "strip_size_kb": 64, 00:26:40.840 "state": "online", 00:26:40.840 "raid_level": "raid5f", 00:26:40.840 "superblock": true, 00:26:40.840 "num_base_bdevs": 4, 00:26:40.840 "num_base_bdevs_discovered": 3, 00:26:40.840 "num_base_bdevs_operational": 3, 00:26:40.840 "base_bdevs_list": [ 00:26:40.840 { 00:26:40.840 "name": null, 00:26:40.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.840 "is_configured": false, 00:26:40.840 "data_offset": 0, 00:26:40.840 "data_size": 63488 00:26:40.840 }, 00:26:40.840 { 00:26:40.840 "name": "BaseBdev2", 00:26:40.840 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:40.840 "is_configured": true, 00:26:40.840 "data_offset": 2048, 00:26:40.840 "data_size": 63488 00:26:40.840 }, 00:26:40.840 { 00:26:40.840 "name": "BaseBdev3", 00:26:40.840 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:40.840 "is_configured": true, 00:26:40.840 "data_offset": 2048, 00:26:40.840 
"data_size": 63488 00:26:40.840 }, 00:26:40.840 { 00:26:40.840 "name": "BaseBdev4", 00:26:40.840 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:40.840 "is_configured": true, 00:26:40.840 "data_offset": 2048, 00:26:40.840 "data_size": 63488 00:26:40.840 } 00:26:40.840 ] 00:26:40.840 }' 00:26:40.840 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:40.840 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:40.840 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:40.840 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:40.840 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:40.840 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.840 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.840 [2024-12-05 12:58:23.227210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:40.840 [2024-12-05 12:58:23.235161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:26:40.840 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.840 12:58:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:26:40.840 [2024-12-05 12:58:23.240293] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:41.771 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:41.771 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:41.771 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:41.771 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:41.771 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:41.771 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:41.771 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:41.771 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.771 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:41.771 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.771 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:41.771 "name": "raid_bdev1", 00:26:41.771 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:41.771 "strip_size_kb": 64, 00:26:41.771 "state": "online", 00:26:41.771 "raid_level": "raid5f", 00:26:41.771 "superblock": true, 00:26:41.771 "num_base_bdevs": 4, 00:26:41.771 "num_base_bdevs_discovered": 4, 00:26:41.771 "num_base_bdevs_operational": 4, 00:26:41.771 "process": { 00:26:41.771 "type": "rebuild", 00:26:41.771 "target": "spare", 00:26:41.771 "progress": { 00:26:41.771 "blocks": 19200, 00:26:41.771 "percent": 10 00:26:41.771 } 00:26:41.771 }, 00:26:41.771 "base_bdevs_list": [ 00:26:41.771 { 00:26:41.771 "name": "spare", 00:26:41.771 "uuid": "784171fb-e490-52ea-ac36-5eac5dbbe902", 00:26:41.771 "is_configured": true, 00:26:41.771 "data_offset": 2048, 00:26:41.771 "data_size": 63488 00:26:41.771 }, 00:26:41.771 { 00:26:41.771 "name": "BaseBdev2", 00:26:41.771 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:41.771 "is_configured": true, 00:26:41.771 "data_offset": 2048, 00:26:41.771 "data_size": 63488 00:26:41.771 }, 00:26:41.771 { 
00:26:41.771 "name": "BaseBdev3", 00:26:41.771 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:41.771 "is_configured": true, 00:26:41.771 "data_offset": 2048, 00:26:41.771 "data_size": 63488 00:26:41.771 }, 00:26:41.771 { 00:26:41.771 "name": "BaseBdev4", 00:26:41.771 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:41.771 "is_configured": true, 00:26:41.772 "data_offset": 2048, 00:26:41.772 "data_size": 63488 00:26:41.772 } 00:26:41.772 ] 00:26:41.772 }' 00:26:41.772 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:41.772 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:41.772 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:41.772 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:41.772 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:26:41.772 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:26:41.772 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:26:41.772 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:26:41.772 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:26:41.772 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=497 00:26:41.772 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:41.772 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:41.772 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:41.772 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:41.772 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:41.772 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:41.772 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:41.772 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.772 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:41.772 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:41.772 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.029 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:42.029 "name": "raid_bdev1", 00:26:42.029 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:42.029 "strip_size_kb": 64, 00:26:42.029 "state": "online", 00:26:42.029 "raid_level": "raid5f", 00:26:42.029 "superblock": true, 00:26:42.029 "num_base_bdevs": 4, 00:26:42.029 "num_base_bdevs_discovered": 4, 00:26:42.029 "num_base_bdevs_operational": 4, 00:26:42.029 "process": { 00:26:42.029 "type": "rebuild", 00:26:42.029 "target": "spare", 00:26:42.029 "progress": { 00:26:42.029 "blocks": 19200, 00:26:42.029 "percent": 10 00:26:42.029 } 00:26:42.029 }, 00:26:42.029 "base_bdevs_list": [ 00:26:42.029 { 00:26:42.029 "name": "spare", 00:26:42.029 "uuid": "784171fb-e490-52ea-ac36-5eac5dbbe902", 00:26:42.029 "is_configured": true, 00:26:42.029 "data_offset": 2048, 00:26:42.029 "data_size": 63488 00:26:42.029 }, 00:26:42.029 { 00:26:42.029 "name": "BaseBdev2", 00:26:42.029 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:42.029 "is_configured": true, 00:26:42.029 "data_offset": 2048, 00:26:42.030 "data_size": 63488 00:26:42.030 }, 00:26:42.030 { 
00:26:42.030 "name": "BaseBdev3", 00:26:42.030 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:42.030 "is_configured": true, 00:26:42.030 "data_offset": 2048, 00:26:42.030 "data_size": 63488 00:26:42.030 }, 00:26:42.030 { 00:26:42.030 "name": "BaseBdev4", 00:26:42.030 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:42.030 "is_configured": true, 00:26:42.030 "data_offset": 2048, 00:26:42.030 "data_size": 63488 00:26:42.030 } 00:26:42.030 ] 00:26:42.030 }' 00:26:42.030 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:42.030 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:42.030 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:42.030 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:42.030 12:58:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:42.964 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:42.964 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:42.964 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:42.964 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:42.964 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:42.964 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:42.964 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:42.964 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:42.964 12:58:25 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.964 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:42.964 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.964 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:42.964 "name": "raid_bdev1", 00:26:42.964 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:42.964 "strip_size_kb": 64, 00:26:42.964 "state": "online", 00:26:42.964 "raid_level": "raid5f", 00:26:42.964 "superblock": true, 00:26:42.964 "num_base_bdevs": 4, 00:26:42.964 "num_base_bdevs_discovered": 4, 00:26:42.964 "num_base_bdevs_operational": 4, 00:26:42.964 "process": { 00:26:42.964 "type": "rebuild", 00:26:42.964 "target": "spare", 00:26:42.964 "progress": { 00:26:42.964 "blocks": 40320, 00:26:42.964 "percent": 21 00:26:42.964 } 00:26:42.964 }, 00:26:42.964 "base_bdevs_list": [ 00:26:42.964 { 00:26:42.964 "name": "spare", 00:26:42.964 "uuid": "784171fb-e490-52ea-ac36-5eac5dbbe902", 00:26:42.964 "is_configured": true, 00:26:42.964 "data_offset": 2048, 00:26:42.964 "data_size": 63488 00:26:42.964 }, 00:26:42.964 { 00:26:42.964 "name": "BaseBdev2", 00:26:42.964 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:42.964 "is_configured": true, 00:26:42.964 "data_offset": 2048, 00:26:42.964 "data_size": 63488 00:26:42.964 }, 00:26:42.964 { 00:26:42.964 "name": "BaseBdev3", 00:26:42.964 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:42.964 "is_configured": true, 00:26:42.964 "data_offset": 2048, 00:26:42.965 "data_size": 63488 00:26:42.965 }, 00:26:42.965 { 00:26:42.965 "name": "BaseBdev4", 00:26:42.965 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:42.965 "is_configured": true, 00:26:42.965 "data_offset": 2048, 00:26:42.965 "data_size": 63488 00:26:42.965 } 00:26:42.965 ] 00:26:42.965 }' 00:26:42.965 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:42.965 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:42.965 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:42.965 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:42.965 12:58:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:44.332 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:44.332 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:44.332 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:44.332 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:44.332 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:44.332 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:44.332 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:44.332 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:44.332 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.332 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:44.332 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.332 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:44.332 "name": "raid_bdev1", 00:26:44.332 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:44.332 "strip_size_kb": 64, 00:26:44.332 "state": 
"online", 00:26:44.332 "raid_level": "raid5f", 00:26:44.332 "superblock": true, 00:26:44.332 "num_base_bdevs": 4, 00:26:44.332 "num_base_bdevs_discovered": 4, 00:26:44.332 "num_base_bdevs_operational": 4, 00:26:44.332 "process": { 00:26:44.332 "type": "rebuild", 00:26:44.332 "target": "spare", 00:26:44.332 "progress": { 00:26:44.332 "blocks": 61440, 00:26:44.332 "percent": 32 00:26:44.332 } 00:26:44.332 }, 00:26:44.332 "base_bdevs_list": [ 00:26:44.332 { 00:26:44.332 "name": "spare", 00:26:44.332 "uuid": "784171fb-e490-52ea-ac36-5eac5dbbe902", 00:26:44.332 "is_configured": true, 00:26:44.332 "data_offset": 2048, 00:26:44.333 "data_size": 63488 00:26:44.333 }, 00:26:44.333 { 00:26:44.333 "name": "BaseBdev2", 00:26:44.333 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:44.333 "is_configured": true, 00:26:44.333 "data_offset": 2048, 00:26:44.333 "data_size": 63488 00:26:44.333 }, 00:26:44.333 { 00:26:44.333 "name": "BaseBdev3", 00:26:44.333 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:44.333 "is_configured": true, 00:26:44.333 "data_offset": 2048, 00:26:44.333 "data_size": 63488 00:26:44.333 }, 00:26:44.333 { 00:26:44.333 "name": "BaseBdev4", 00:26:44.333 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:44.333 "is_configured": true, 00:26:44.333 "data_offset": 2048, 00:26:44.333 "data_size": 63488 00:26:44.333 } 00:26:44.333 ] 00:26:44.333 }' 00:26:44.333 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:44.333 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:44.333 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:44.333 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:44.333 12:58:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:45.265 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:45.265 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:45.265 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:45.265 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:45.265 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:45.265 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:45.265 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:45.265 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.265 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:45.265 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:45.265 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.265 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:45.265 "name": "raid_bdev1", 00:26:45.265 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:45.265 "strip_size_kb": 64, 00:26:45.265 "state": "online", 00:26:45.265 "raid_level": "raid5f", 00:26:45.265 "superblock": true, 00:26:45.265 "num_base_bdevs": 4, 00:26:45.265 "num_base_bdevs_discovered": 4, 00:26:45.265 "num_base_bdevs_operational": 4, 00:26:45.265 "process": { 00:26:45.265 "type": "rebuild", 00:26:45.265 "target": "spare", 00:26:45.265 "progress": { 00:26:45.265 "blocks": 82560, 00:26:45.265 "percent": 43 00:26:45.265 } 00:26:45.265 }, 00:26:45.265 "base_bdevs_list": [ 00:26:45.265 { 00:26:45.265 "name": "spare", 00:26:45.265 "uuid": "784171fb-e490-52ea-ac36-5eac5dbbe902", 
00:26:45.265 "is_configured": true, 00:26:45.265 "data_offset": 2048, 00:26:45.265 "data_size": 63488 00:26:45.265 }, 00:26:45.265 { 00:26:45.265 "name": "BaseBdev2", 00:26:45.265 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:45.265 "is_configured": true, 00:26:45.265 "data_offset": 2048, 00:26:45.265 "data_size": 63488 00:26:45.265 }, 00:26:45.265 { 00:26:45.265 "name": "BaseBdev3", 00:26:45.265 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:45.265 "is_configured": true, 00:26:45.265 "data_offset": 2048, 00:26:45.265 "data_size": 63488 00:26:45.265 }, 00:26:45.265 { 00:26:45.265 "name": "BaseBdev4", 00:26:45.265 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:45.265 "is_configured": true, 00:26:45.265 "data_offset": 2048, 00:26:45.265 "data_size": 63488 00:26:45.265 } 00:26:45.265 ] 00:26:45.265 }' 00:26:45.265 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:45.265 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:45.265 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:45.265 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:45.265 12:58:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:46.197 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:46.197 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:46.197 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:46.197 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:46.197 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:46.197 12:58:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:46.197 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.197 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:46.197 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.197 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.197 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.197 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:46.197 "name": "raid_bdev1", 00:26:46.197 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:46.197 "strip_size_kb": 64, 00:26:46.197 "state": "online", 00:26:46.197 "raid_level": "raid5f", 00:26:46.197 "superblock": true, 00:26:46.197 "num_base_bdevs": 4, 00:26:46.197 "num_base_bdevs_discovered": 4, 00:26:46.197 "num_base_bdevs_operational": 4, 00:26:46.197 "process": { 00:26:46.197 "type": "rebuild", 00:26:46.197 "target": "spare", 00:26:46.197 "progress": { 00:26:46.197 "blocks": 103680, 00:26:46.197 "percent": 54 00:26:46.197 } 00:26:46.197 }, 00:26:46.197 "base_bdevs_list": [ 00:26:46.197 { 00:26:46.197 "name": "spare", 00:26:46.197 "uuid": "784171fb-e490-52ea-ac36-5eac5dbbe902", 00:26:46.197 "is_configured": true, 00:26:46.197 "data_offset": 2048, 00:26:46.197 "data_size": 63488 00:26:46.197 }, 00:26:46.197 { 00:26:46.197 "name": "BaseBdev2", 00:26:46.197 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:46.197 "is_configured": true, 00:26:46.197 "data_offset": 2048, 00:26:46.197 "data_size": 63488 00:26:46.197 }, 00:26:46.197 { 00:26:46.197 "name": "BaseBdev3", 00:26:46.197 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:46.197 "is_configured": true, 00:26:46.197 "data_offset": 2048, 00:26:46.197 
"data_size": 63488 00:26:46.197 }, 00:26:46.197 { 00:26:46.197 "name": "BaseBdev4", 00:26:46.197 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:46.197 "is_configured": true, 00:26:46.197 "data_offset": 2048, 00:26:46.197 "data_size": 63488 00:26:46.197 } 00:26:46.197 ] 00:26:46.197 }' 00:26:46.197 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:46.455 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:46.455 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:46.455 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:46.455 12:58:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:47.389 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:47.389 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:47.389 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:47.389 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:47.389 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:47.389 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:47.389 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:47.389 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:47.389 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.389 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:47.389 
12:58:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.389 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:47.389 "name": "raid_bdev1", 00:26:47.389 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:47.389 "strip_size_kb": 64, 00:26:47.389 "state": "online", 00:26:47.389 "raid_level": "raid5f", 00:26:47.389 "superblock": true, 00:26:47.389 "num_base_bdevs": 4, 00:26:47.389 "num_base_bdevs_discovered": 4, 00:26:47.389 "num_base_bdevs_operational": 4, 00:26:47.389 "process": { 00:26:47.389 "type": "rebuild", 00:26:47.389 "target": "spare", 00:26:47.389 "progress": { 00:26:47.389 "blocks": 124800, 00:26:47.389 "percent": 65 00:26:47.389 } 00:26:47.389 }, 00:26:47.389 "base_bdevs_list": [ 00:26:47.389 { 00:26:47.389 "name": "spare", 00:26:47.389 "uuid": "784171fb-e490-52ea-ac36-5eac5dbbe902", 00:26:47.389 "is_configured": true, 00:26:47.389 "data_offset": 2048, 00:26:47.389 "data_size": 63488 00:26:47.389 }, 00:26:47.389 { 00:26:47.389 "name": "BaseBdev2", 00:26:47.389 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:47.389 "is_configured": true, 00:26:47.389 "data_offset": 2048, 00:26:47.389 "data_size": 63488 00:26:47.389 }, 00:26:47.389 { 00:26:47.389 "name": "BaseBdev3", 00:26:47.389 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:47.389 "is_configured": true, 00:26:47.389 "data_offset": 2048, 00:26:47.389 "data_size": 63488 00:26:47.389 }, 00:26:47.389 { 00:26:47.389 "name": "BaseBdev4", 00:26:47.389 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:47.389 "is_configured": true, 00:26:47.389 "data_offset": 2048, 00:26:47.389 "data_size": 63488 00:26:47.389 } 00:26:47.389 ] 00:26:47.389 }' 00:26:47.389 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:47.389 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:47.389 12:58:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:47.389 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:47.389 12:58:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:48.764 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:48.764 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:48.764 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:48.764 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:48.764 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:48.764 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:48.764 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.764 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:48.764 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.764 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.764 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.764 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:48.764 "name": "raid_bdev1", 00:26:48.764 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:48.764 "strip_size_kb": 64, 00:26:48.764 "state": "online", 00:26:48.764 "raid_level": "raid5f", 00:26:48.764 "superblock": true, 00:26:48.764 "num_base_bdevs": 4, 00:26:48.764 "num_base_bdevs_discovered": 4, 00:26:48.764 "num_base_bdevs_operational": 
4, 00:26:48.764 "process": { 00:26:48.764 "type": "rebuild", 00:26:48.764 "target": "spare", 00:26:48.764 "progress": { 00:26:48.764 "blocks": 145920, 00:26:48.764 "percent": 76 00:26:48.764 } 00:26:48.764 }, 00:26:48.764 "base_bdevs_list": [ 00:26:48.764 { 00:26:48.764 "name": "spare", 00:26:48.764 "uuid": "784171fb-e490-52ea-ac36-5eac5dbbe902", 00:26:48.764 "is_configured": true, 00:26:48.764 "data_offset": 2048, 00:26:48.764 "data_size": 63488 00:26:48.764 }, 00:26:48.764 { 00:26:48.764 "name": "BaseBdev2", 00:26:48.764 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:48.764 "is_configured": true, 00:26:48.764 "data_offset": 2048, 00:26:48.764 "data_size": 63488 00:26:48.764 }, 00:26:48.764 { 00:26:48.764 "name": "BaseBdev3", 00:26:48.764 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:48.764 "is_configured": true, 00:26:48.764 "data_offset": 2048, 00:26:48.764 "data_size": 63488 00:26:48.764 }, 00:26:48.764 { 00:26:48.764 "name": "BaseBdev4", 00:26:48.764 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:48.764 "is_configured": true, 00:26:48.764 "data_offset": 2048, 00:26:48.764 "data_size": 63488 00:26:48.764 } 00:26:48.764 ] 00:26:48.764 }' 00:26:48.764 12:58:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:48.764 12:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:48.764 12:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:48.764 12:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:48.764 12:58:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:49.698 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:49.698 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:49.698 
12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:49.698 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:49.698 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:49.698 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:49.698 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:49.698 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.698 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.698 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.698 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.698 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:49.698 "name": "raid_bdev1", 00:26:49.698 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:49.698 "strip_size_kb": 64, 00:26:49.698 "state": "online", 00:26:49.698 "raid_level": "raid5f", 00:26:49.698 "superblock": true, 00:26:49.698 "num_base_bdevs": 4, 00:26:49.698 "num_base_bdevs_discovered": 4, 00:26:49.698 "num_base_bdevs_operational": 4, 00:26:49.698 "process": { 00:26:49.698 "type": "rebuild", 00:26:49.698 "target": "spare", 00:26:49.698 "progress": { 00:26:49.698 "blocks": 167040, 00:26:49.698 "percent": 87 00:26:49.698 } 00:26:49.698 }, 00:26:49.698 "base_bdevs_list": [ 00:26:49.698 { 00:26:49.698 "name": "spare", 00:26:49.698 "uuid": "784171fb-e490-52ea-ac36-5eac5dbbe902", 00:26:49.698 "is_configured": true, 00:26:49.698 "data_offset": 2048, 00:26:49.698 "data_size": 63488 00:26:49.698 }, 00:26:49.698 { 00:26:49.698 "name": "BaseBdev2", 00:26:49.698 "uuid": 
"a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:49.698 "is_configured": true, 00:26:49.698 "data_offset": 2048, 00:26:49.698 "data_size": 63488 00:26:49.698 }, 00:26:49.698 { 00:26:49.698 "name": "BaseBdev3", 00:26:49.698 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:49.698 "is_configured": true, 00:26:49.698 "data_offset": 2048, 00:26:49.698 "data_size": 63488 00:26:49.698 }, 00:26:49.698 { 00:26:49.698 "name": "BaseBdev4", 00:26:49.698 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:49.698 "is_configured": true, 00:26:49.698 "data_offset": 2048, 00:26:49.698 "data_size": 63488 00:26:49.698 } 00:26:49.698 ] 00:26:49.698 }' 00:26:49.698 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:49.698 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:49.698 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:49.698 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:49.698 12:58:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:50.632 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:50.632 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:50.632 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:50.632 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:50.632 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:50.632 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:50.632 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:26:50.632 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.632 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.632 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.632 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.632 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:50.632 "name": "raid_bdev1", 00:26:50.632 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:50.632 "strip_size_kb": 64, 00:26:50.632 "state": "online", 00:26:50.632 "raid_level": "raid5f", 00:26:50.632 "superblock": true, 00:26:50.632 "num_base_bdevs": 4, 00:26:50.632 "num_base_bdevs_discovered": 4, 00:26:50.632 "num_base_bdevs_operational": 4, 00:26:50.632 "process": { 00:26:50.632 "type": "rebuild", 00:26:50.632 "target": "spare", 00:26:50.632 "progress": { 00:26:50.632 "blocks": 188160, 00:26:50.632 "percent": 98 00:26:50.632 } 00:26:50.632 }, 00:26:50.632 "base_bdevs_list": [ 00:26:50.632 { 00:26:50.632 "name": "spare", 00:26:50.632 "uuid": "784171fb-e490-52ea-ac36-5eac5dbbe902", 00:26:50.632 "is_configured": true, 00:26:50.632 "data_offset": 2048, 00:26:50.632 "data_size": 63488 00:26:50.632 }, 00:26:50.632 { 00:26:50.632 "name": "BaseBdev2", 00:26:50.632 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:50.632 "is_configured": true, 00:26:50.632 "data_offset": 2048, 00:26:50.632 "data_size": 63488 00:26:50.632 }, 00:26:50.632 { 00:26:50.632 "name": "BaseBdev3", 00:26:50.632 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:50.632 "is_configured": true, 00:26:50.632 "data_offset": 2048, 00:26:50.632 "data_size": 63488 00:26:50.632 }, 00:26:50.632 { 00:26:50.632 "name": "BaseBdev4", 00:26:50.632 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:50.632 "is_configured": true, 00:26:50.632 "data_offset": 
2048, 00:26:50.632 "data_size": 63488 00:26:50.632 } 00:26:50.632 ] 00:26:50.632 }' 00:26:50.632 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:50.632 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:50.632 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:50.890 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:50.890 12:58:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:50.890 [2024-12-05 12:58:33.304781] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:50.890 [2024-12-05 12:58:33.304845] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:50.890 [2024-12-05 12:58:33.304966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.822 12:58:34 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:51.822 "name": "raid_bdev1", 00:26:51.822 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:51.822 "strip_size_kb": 64, 00:26:51.822 "state": "online", 00:26:51.822 "raid_level": "raid5f", 00:26:51.822 "superblock": true, 00:26:51.822 "num_base_bdevs": 4, 00:26:51.822 "num_base_bdevs_discovered": 4, 00:26:51.822 "num_base_bdevs_operational": 4, 00:26:51.822 "base_bdevs_list": [ 00:26:51.822 { 00:26:51.822 "name": "spare", 00:26:51.822 "uuid": "784171fb-e490-52ea-ac36-5eac5dbbe902", 00:26:51.822 "is_configured": true, 00:26:51.822 "data_offset": 2048, 00:26:51.822 "data_size": 63488 00:26:51.822 }, 00:26:51.822 { 00:26:51.822 "name": "BaseBdev2", 00:26:51.822 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:51.822 "is_configured": true, 00:26:51.822 "data_offset": 2048, 00:26:51.822 "data_size": 63488 00:26:51.822 }, 00:26:51.822 { 00:26:51.822 "name": "BaseBdev3", 00:26:51.822 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:51.822 "is_configured": true, 00:26:51.822 "data_offset": 2048, 00:26:51.822 "data_size": 63488 00:26:51.822 }, 00:26:51.822 { 00:26:51.822 "name": "BaseBdev4", 00:26:51.822 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:51.822 "is_configured": true, 00:26:51.822 "data_offset": 2048, 00:26:51.822 "data_size": 63488 00:26:51.822 } 00:26:51.822 ] 00:26:51.822 }' 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:51.822 12:58:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:51.822 "name": "raid_bdev1", 00:26:51.822 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:51.822 "strip_size_kb": 64, 00:26:51.822 "state": "online", 00:26:51.822 "raid_level": "raid5f", 00:26:51.822 "superblock": true, 00:26:51.822 "num_base_bdevs": 4, 00:26:51.822 "num_base_bdevs_discovered": 4, 00:26:51.822 "num_base_bdevs_operational": 4, 00:26:51.822 "base_bdevs_list": [ 00:26:51.822 { 00:26:51.822 "name": "spare", 00:26:51.822 "uuid": 
"784171fb-e490-52ea-ac36-5eac5dbbe902", 00:26:51.822 "is_configured": true, 00:26:51.822 "data_offset": 2048, 00:26:51.822 "data_size": 63488 00:26:51.822 }, 00:26:51.822 { 00:26:51.822 "name": "BaseBdev2", 00:26:51.822 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:51.822 "is_configured": true, 00:26:51.822 "data_offset": 2048, 00:26:51.822 "data_size": 63488 00:26:51.822 }, 00:26:51.822 { 00:26:51.822 "name": "BaseBdev3", 00:26:51.822 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:51.822 "is_configured": true, 00:26:51.822 "data_offset": 2048, 00:26:51.822 "data_size": 63488 00:26:51.822 }, 00:26:51.822 { 00:26:51.822 "name": "BaseBdev4", 00:26:51.822 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:51.822 "is_configured": true, 00:26:51.822 "data_offset": 2048, 00:26:51.822 "data_size": 63488 00:26:51.822 } 00:26:51.822 ] 00:26:51.822 }' 00:26:51.822 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:52.209 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:52.209 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:52.209 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:52.209 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:52.210 "name": "raid_bdev1", 00:26:52.210 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:52.210 "strip_size_kb": 64, 00:26:52.210 "state": "online", 00:26:52.210 "raid_level": "raid5f", 00:26:52.210 "superblock": true, 00:26:52.210 "num_base_bdevs": 4, 00:26:52.210 "num_base_bdevs_discovered": 4, 00:26:52.210 "num_base_bdevs_operational": 4, 00:26:52.210 "base_bdevs_list": [ 00:26:52.210 { 00:26:52.210 "name": "spare", 00:26:52.210 "uuid": "784171fb-e490-52ea-ac36-5eac5dbbe902", 00:26:52.210 "is_configured": true, 00:26:52.210 "data_offset": 2048, 00:26:52.210 "data_size": 63488 00:26:52.210 }, 00:26:52.210 { 00:26:52.210 "name": "BaseBdev2", 00:26:52.210 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:52.210 "is_configured": true, 00:26:52.210 "data_offset": 2048, 00:26:52.210 "data_size": 63488 00:26:52.210 }, 00:26:52.210 { 00:26:52.210 "name": 
"BaseBdev3", 00:26:52.210 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:52.210 "is_configured": true, 00:26:52.210 "data_offset": 2048, 00:26:52.210 "data_size": 63488 00:26:52.210 }, 00:26:52.210 { 00:26:52.210 "name": "BaseBdev4", 00:26:52.210 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:52.210 "is_configured": true, 00:26:52.210 "data_offset": 2048, 00:26:52.210 "data_size": 63488 00:26:52.210 } 00:26:52.210 ] 00:26:52.210 }' 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.210 [2024-12-05 12:58:34.753723] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:52.210 [2024-12-05 12:58:34.753749] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:52.210 [2024-12-05 12:58:34.753815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:52.210 [2024-12-05 12:58:34.753894] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:52.210 [2024-12-05 12:58:34.753904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.210 
12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.210 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:26:52.470 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.470 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:26:52.470 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:26:52.470 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:26:52.470 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:52.470 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:52.470 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:52.470 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:52.470 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:52.470 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:52.470 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:26:52.470 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:52.470 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:52.470 12:58:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:52.470 /dev/nbd0 00:26:52.470 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:52.470 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:26:52.470 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:52.470 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:26:52.470 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:52.470 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:52.470 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:52.470 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:26:52.470 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:52.470 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:52.470 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:52.470 1+0 records in 00:26:52.470 1+0 records out 00:26:52.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199004 s, 20.6 MB/s 00:26:52.470 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:52.470 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:26:52.470 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:52.470 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:52.470 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:26:52.470 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:52.470 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i 
< 2 )) 00:26:52.470 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:26:52.728 /dev/nbd1 00:26:52.728 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:52.728 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:52.728 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:26:52.728 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:26:52.728 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:52.728 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:52.728 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:26:52.728 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:26:52.728 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:52.728 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:52.728 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:52.728 1+0 records in 00:26:52.728 1+0 records out 00:26:52.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000164993 s, 24.8 MB/s 00:26:52.728 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:52.728 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:26:52.728 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:26:52.728 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:52.728 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:26:52.728 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:52.728 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:52.728 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:52.986 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:26:52.986 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:26:52.986 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:52.986 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:52.986 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:26:52.986 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:52.986 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:53.243 12:58:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.243 [2024-12-05 12:58:35.808903] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:53.243 [2024-12-05 12:58:35.808948] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:53.243 [2024-12-05 12:58:35.808968] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:26:53.243 [2024-12-05 12:58:35.808976] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:53.243 [2024-12-05 12:58:35.810841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:53.243 [2024-12-05 12:58:35.810873] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:53.243 [2024-12-05 12:58:35.810950] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:53.243 [2024-12-05 12:58:35.810989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:53.243 [2024-12-05 12:58:35.811095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:53.243 [2024-12-05 12:58:35.811167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:53.243 [2024-12-05 12:58:35.811229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:53.243 spare 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.243 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.500 
[2024-12-05 12:58:35.911308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:26:53.500 [2024-12-05 12:58:35.911346] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:53.500 [2024-12-05 12:58:35.911600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:26:53.500 [2024-12-05 12:58:35.915316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:26:53.500 [2024-12-05 12:58:35.915337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:26:53.500 [2024-12-05 12:58:35.915504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:53.500 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.500 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:53.500 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:53.500 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:53.500 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:53.500 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:53.500 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:53.500 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:53.500 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:53.500 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:53.500 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:26:53.500 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:53.500 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:53.500 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.500 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.500 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.500 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:53.500 "name": "raid_bdev1", 00:26:53.500 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:53.500 "strip_size_kb": 64, 00:26:53.500 "state": "online", 00:26:53.500 "raid_level": "raid5f", 00:26:53.500 "superblock": true, 00:26:53.500 "num_base_bdevs": 4, 00:26:53.500 "num_base_bdevs_discovered": 4, 00:26:53.500 "num_base_bdevs_operational": 4, 00:26:53.500 "base_bdevs_list": [ 00:26:53.500 { 00:26:53.500 "name": "spare", 00:26:53.500 "uuid": "784171fb-e490-52ea-ac36-5eac5dbbe902", 00:26:53.500 "is_configured": true, 00:26:53.500 "data_offset": 2048, 00:26:53.500 "data_size": 63488 00:26:53.500 }, 00:26:53.500 { 00:26:53.500 "name": "BaseBdev2", 00:26:53.500 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:53.500 "is_configured": true, 00:26:53.500 "data_offset": 2048, 00:26:53.500 "data_size": 63488 00:26:53.500 }, 00:26:53.500 { 00:26:53.500 "name": "BaseBdev3", 00:26:53.500 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:53.500 "is_configured": true, 00:26:53.500 "data_offset": 2048, 00:26:53.500 "data_size": 63488 00:26:53.500 }, 00:26:53.500 { 00:26:53.500 "name": "BaseBdev4", 00:26:53.500 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:53.500 "is_configured": true, 00:26:53.500 "data_offset": 2048, 00:26:53.500 "data_size": 63488 00:26:53.500 } 00:26:53.500 ] 00:26:53.500 }' 
00:26:53.500 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:53.500 12:58:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:53.759 "name": "raid_bdev1", 00:26:53.759 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:53.759 "strip_size_kb": 64, 00:26:53.759 "state": "online", 00:26:53.759 "raid_level": "raid5f", 00:26:53.759 "superblock": true, 00:26:53.759 "num_base_bdevs": 4, 00:26:53.759 "num_base_bdevs_discovered": 4, 00:26:53.759 "num_base_bdevs_operational": 4, 00:26:53.759 "base_bdevs_list": [ 00:26:53.759 { 00:26:53.759 "name": "spare", 00:26:53.759 "uuid": "784171fb-e490-52ea-ac36-5eac5dbbe902", 00:26:53.759 "is_configured": true, 00:26:53.759 "data_offset": 2048, 
00:26:53.759 "data_size": 63488 00:26:53.759 }, 00:26:53.759 { 00:26:53.759 "name": "BaseBdev2", 00:26:53.759 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:53.759 "is_configured": true, 00:26:53.759 "data_offset": 2048, 00:26:53.759 "data_size": 63488 00:26:53.759 }, 00:26:53.759 { 00:26:53.759 "name": "BaseBdev3", 00:26:53.759 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:53.759 "is_configured": true, 00:26:53.759 "data_offset": 2048, 00:26:53.759 "data_size": 63488 00:26:53.759 }, 00:26:53.759 { 00:26:53.759 "name": "BaseBdev4", 00:26:53.759 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:53.759 "is_configured": true, 00:26:53.759 "data_offset": 2048, 00:26:53.759 "data_size": 63488 00:26:53.759 } 00:26:53.759 ] 00:26:53.759 }' 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev 
spare 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.759 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.017 [2024-12-05 12:58:36.343860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:54.017 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.017 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:54.017 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:54.017 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:54.017 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:54.017 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:54.017 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:54.017 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:54.017 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:54.017 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:54.017 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:54.017 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.017 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:54.017 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.017 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:26:54.017 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.017 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:54.017 "name": "raid_bdev1", 00:26:54.017 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:54.017 "strip_size_kb": 64, 00:26:54.017 "state": "online", 00:26:54.017 "raid_level": "raid5f", 00:26:54.017 "superblock": true, 00:26:54.017 "num_base_bdevs": 4, 00:26:54.017 "num_base_bdevs_discovered": 3, 00:26:54.017 "num_base_bdevs_operational": 3, 00:26:54.017 "base_bdevs_list": [ 00:26:54.017 { 00:26:54.017 "name": null, 00:26:54.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.017 "is_configured": false, 00:26:54.017 "data_offset": 0, 00:26:54.017 "data_size": 63488 00:26:54.017 }, 00:26:54.017 { 00:26:54.017 "name": "BaseBdev2", 00:26:54.017 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:54.017 "is_configured": true, 00:26:54.017 "data_offset": 2048, 00:26:54.017 "data_size": 63488 00:26:54.017 }, 00:26:54.017 { 00:26:54.017 "name": "BaseBdev3", 00:26:54.017 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:54.017 "is_configured": true, 00:26:54.017 "data_offset": 2048, 00:26:54.018 "data_size": 63488 00:26:54.018 }, 00:26:54.018 { 00:26:54.018 "name": "BaseBdev4", 00:26:54.018 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:54.018 "is_configured": true, 00:26:54.018 "data_offset": 2048, 00:26:54.018 "data_size": 63488 00:26:54.018 } 00:26:54.018 ] 00:26:54.018 }' 00:26:54.018 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:54.018 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.276 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:54.276 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.276 12:58:36 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.276 [2024-12-05 12:58:36.643930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:54.276 [2024-12-05 12:58:36.644074] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:54.276 [2024-12-05 12:58:36.644090] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:54.276 [2024-12-05 12:58:36.644117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:54.276 [2024-12-05 12:58:36.651892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:26:54.276 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.276 12:58:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:26:54.276 [2024-12-05 12:58:36.657302] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:55.211 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:55.211 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:55.211 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:55.211 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:55.211 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:55.211 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:55.211 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.211 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.211 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.211 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.211 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:55.211 "name": "raid_bdev1", 00:26:55.211 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:55.211 "strip_size_kb": 64, 00:26:55.211 "state": "online", 00:26:55.211 "raid_level": "raid5f", 00:26:55.211 "superblock": true, 00:26:55.211 "num_base_bdevs": 4, 00:26:55.211 "num_base_bdevs_discovered": 4, 00:26:55.211 "num_base_bdevs_operational": 4, 00:26:55.211 "process": { 00:26:55.211 "type": "rebuild", 00:26:55.211 "target": "spare", 00:26:55.211 "progress": { 00:26:55.211 "blocks": 19200, 00:26:55.211 "percent": 10 00:26:55.211 } 00:26:55.211 }, 00:26:55.211 "base_bdevs_list": [ 00:26:55.211 { 00:26:55.211 "name": "spare", 00:26:55.211 "uuid": "784171fb-e490-52ea-ac36-5eac5dbbe902", 00:26:55.211 "is_configured": true, 00:26:55.211 "data_offset": 2048, 00:26:55.211 "data_size": 63488 00:26:55.211 }, 00:26:55.211 { 00:26:55.211 "name": "BaseBdev2", 00:26:55.211 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:55.211 "is_configured": true, 00:26:55.211 "data_offset": 2048, 00:26:55.211 "data_size": 63488 00:26:55.211 }, 00:26:55.211 { 00:26:55.211 "name": "BaseBdev3", 00:26:55.211 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:55.211 "is_configured": true, 00:26:55.211 "data_offset": 2048, 00:26:55.211 "data_size": 63488 00:26:55.211 }, 00:26:55.211 { 00:26:55.211 "name": "BaseBdev4", 00:26:55.211 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:55.211 "is_configured": true, 00:26:55.211 "data_offset": 2048, 00:26:55.211 "data_size": 63488 00:26:55.211 } 00:26:55.211 ] 00:26:55.211 }' 00:26:55.211 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:26:55.211 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:55.212 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:55.212 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:55.212 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:26:55.212 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.212 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.212 [2024-12-05 12:58:37.758072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:55.212 [2024-12-05 12:58:37.764549] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:55.212 [2024-12-05 12:58:37.764614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:55.212 [2024-12-05 12:58:37.764628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:55.212 [2024-12-05 12:58:37.764635] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:55.212 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.212 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:55.212 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:55.212 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:55.212 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:55.212 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:26:55.212 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:55.212 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:55.212 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:55.212 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:55.212 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:55.212 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.212 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:55.212 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.212 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.470 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.470 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:55.470 "name": "raid_bdev1", 00:26:55.470 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:55.470 "strip_size_kb": 64, 00:26:55.470 "state": "online", 00:26:55.470 "raid_level": "raid5f", 00:26:55.470 "superblock": true, 00:26:55.470 "num_base_bdevs": 4, 00:26:55.470 "num_base_bdevs_discovered": 3, 00:26:55.470 "num_base_bdevs_operational": 3, 00:26:55.470 "base_bdevs_list": [ 00:26:55.470 { 00:26:55.470 "name": null, 00:26:55.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:55.470 "is_configured": false, 00:26:55.470 "data_offset": 0, 00:26:55.470 "data_size": 63488 00:26:55.470 }, 00:26:55.470 { 00:26:55.470 "name": "BaseBdev2", 00:26:55.470 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:55.470 "is_configured": true, 00:26:55.470 
"data_offset": 2048, 00:26:55.470 "data_size": 63488 00:26:55.470 }, 00:26:55.470 { 00:26:55.470 "name": "BaseBdev3", 00:26:55.470 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:55.470 "is_configured": true, 00:26:55.470 "data_offset": 2048, 00:26:55.470 "data_size": 63488 00:26:55.470 }, 00:26:55.470 { 00:26:55.470 "name": "BaseBdev4", 00:26:55.470 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:55.470 "is_configured": true, 00:26:55.470 "data_offset": 2048, 00:26:55.470 "data_size": 63488 00:26:55.470 } 00:26:55.470 ] 00:26:55.470 }' 00:26:55.470 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:55.470 12:58:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.728 12:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:55.728 12:58:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.728 12:58:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.728 [2024-12-05 12:58:38.112852] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:55.728 [2024-12-05 12:58:38.112910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:55.728 [2024-12-05 12:58:38.112931] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:26:55.728 [2024-12-05 12:58:38.112940] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:55.728 [2024-12-05 12:58:38.113314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:55.728 [2024-12-05 12:58:38.113327] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:55.728 [2024-12-05 12:58:38.113397] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:55.728 [2024-12-05 12:58:38.113408] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:55.728 [2024-12-05 12:58:38.113416] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:55.728 [2024-12-05 12:58:38.113436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:55.728 [2024-12-05 12:58:38.121023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:26:55.728 spare 00:26:55.728 12:58:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.728 12:58:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:26:55.728 [2024-12-05 12:58:38.126018] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:56.659 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:56.659 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:56.659 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:56.659 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:56.659 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:56.659 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:56.659 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.659 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:56.659 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.659 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:26:56.659 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:56.659 "name": "raid_bdev1", 00:26:56.659 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:56.659 "strip_size_kb": 64, 00:26:56.659 "state": "online", 00:26:56.659 "raid_level": "raid5f", 00:26:56.659 "superblock": true, 00:26:56.659 "num_base_bdevs": 4, 00:26:56.659 "num_base_bdevs_discovered": 4, 00:26:56.659 "num_base_bdevs_operational": 4, 00:26:56.659 "process": { 00:26:56.659 "type": "rebuild", 00:26:56.659 "target": "spare", 00:26:56.659 "progress": { 00:26:56.659 "blocks": 19200, 00:26:56.659 "percent": 10 00:26:56.659 } 00:26:56.659 }, 00:26:56.659 "base_bdevs_list": [ 00:26:56.659 { 00:26:56.659 "name": "spare", 00:26:56.659 "uuid": "784171fb-e490-52ea-ac36-5eac5dbbe902", 00:26:56.659 "is_configured": true, 00:26:56.659 "data_offset": 2048, 00:26:56.659 "data_size": 63488 00:26:56.659 }, 00:26:56.659 { 00:26:56.659 "name": "BaseBdev2", 00:26:56.659 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:56.659 "is_configured": true, 00:26:56.659 "data_offset": 2048, 00:26:56.659 "data_size": 63488 00:26:56.659 }, 00:26:56.659 { 00:26:56.659 "name": "BaseBdev3", 00:26:56.659 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:56.659 "is_configured": true, 00:26:56.659 "data_offset": 2048, 00:26:56.659 "data_size": 63488 00:26:56.659 }, 00:26:56.659 { 00:26:56.659 "name": "BaseBdev4", 00:26:56.659 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:56.659 "is_configured": true, 00:26:56.659 "data_offset": 2048, 00:26:56.659 "data_size": 63488 00:26:56.659 } 00:26:56.659 ] 00:26:56.660 }' 00:26:56.660 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:56.660 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:56.660 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:26:56.660 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:56.660 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:26:56.660 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.660 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.660 [2024-12-05 12:58:39.218928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:56.660 [2024-12-05 12:58:39.234000] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:56.660 [2024-12-05 12:58:39.234062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:56.660 [2024-12-05 12:58:39.234082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:56.660 [2024-12-05 12:58:39.234090] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:56.918 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.918 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:56.918 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:56.918 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:56.918 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:56.918 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:56.918 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:56.918 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:26:56.918 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:56.918 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:56.918 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:56.918 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:56.918 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.918 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.918 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:56.918 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.918 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:56.918 "name": "raid_bdev1", 00:26:56.918 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:56.918 "strip_size_kb": 64, 00:26:56.918 "state": "online", 00:26:56.918 "raid_level": "raid5f", 00:26:56.918 "superblock": true, 00:26:56.918 "num_base_bdevs": 4, 00:26:56.918 "num_base_bdevs_discovered": 3, 00:26:56.918 "num_base_bdevs_operational": 3, 00:26:56.918 "base_bdevs_list": [ 00:26:56.918 { 00:26:56.918 "name": null, 00:26:56.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:56.918 "is_configured": false, 00:26:56.918 "data_offset": 0, 00:26:56.918 "data_size": 63488 00:26:56.918 }, 00:26:56.918 { 00:26:56.918 "name": "BaseBdev2", 00:26:56.918 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:56.918 "is_configured": true, 00:26:56.918 "data_offset": 2048, 00:26:56.918 "data_size": 63488 00:26:56.918 }, 00:26:56.918 { 00:26:56.918 "name": "BaseBdev3", 00:26:56.918 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:56.918 "is_configured": true, 00:26:56.918 "data_offset": 2048, 
00:26:56.918 "data_size": 63488 00:26:56.918 }, 00:26:56.918 { 00:26:56.918 "name": "BaseBdev4", 00:26:56.918 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:56.918 "is_configured": true, 00:26:56.918 "data_offset": 2048, 00:26:56.918 "data_size": 63488 00:26:56.918 } 00:26:56.918 ] 00:26:56.918 }' 00:26:56.918 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:56.918 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:57.175 "name": "raid_bdev1", 00:26:57.175 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:57.175 "strip_size_kb": 64, 00:26:57.175 "state": "online", 00:26:57.175 "raid_level": "raid5f", 00:26:57.175 "superblock": true, 00:26:57.175 "num_base_bdevs": 4, 
00:26:57.175 "num_base_bdevs_discovered": 3, 00:26:57.175 "num_base_bdevs_operational": 3, 00:26:57.175 "base_bdevs_list": [ 00:26:57.175 { 00:26:57.175 "name": null, 00:26:57.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:57.175 "is_configured": false, 00:26:57.175 "data_offset": 0, 00:26:57.175 "data_size": 63488 00:26:57.175 }, 00:26:57.175 { 00:26:57.175 "name": "BaseBdev2", 00:26:57.175 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:57.175 "is_configured": true, 00:26:57.175 "data_offset": 2048, 00:26:57.175 "data_size": 63488 00:26:57.175 }, 00:26:57.175 { 00:26:57.175 "name": "BaseBdev3", 00:26:57.175 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:57.175 "is_configured": true, 00:26:57.175 "data_offset": 2048, 00:26:57.175 "data_size": 63488 00:26:57.175 }, 00:26:57.175 { 00:26:57.175 "name": "BaseBdev4", 00:26:57.175 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:57.175 "is_configured": true, 00:26:57.175 "data_offset": 2048, 00:26:57.175 "data_size": 63488 00:26:57.175 } 00:26:57.175 ] 00:26:57.175 }' 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.175 12:58:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.175 [2024-12-05 12:58:39.685459] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:57.175 [2024-12-05 12:58:39.685522] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:57.175 [2024-12-05 12:58:39.685544] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:26:57.175 [2024-12-05 12:58:39.685555] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:57.175 [2024-12-05 12:58:39.686008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:57.175 [2024-12-05 12:58:39.686031] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:57.175 [2024-12-05 12:58:39.686107] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:57.175 [2024-12-05 12:58:39.686121] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:57.175 [2024-12-05 12:58:39.686132] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:57.175 [2024-12-05 12:58:39.686141] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:26:57.175 BaseBdev1 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.175 12:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:58.546 "name": "raid_bdev1", 00:26:58.546 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:58.546 "strip_size_kb": 64, 00:26:58.546 "state": "online", 00:26:58.546 "raid_level": "raid5f", 00:26:58.546 "superblock": true, 00:26:58.546 "num_base_bdevs": 4, 00:26:58.546 
"num_base_bdevs_discovered": 3, 00:26:58.546 "num_base_bdevs_operational": 3, 00:26:58.546 "base_bdevs_list": [ 00:26:58.546 { 00:26:58.546 "name": null, 00:26:58.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:58.546 "is_configured": false, 00:26:58.546 "data_offset": 0, 00:26:58.546 "data_size": 63488 00:26:58.546 }, 00:26:58.546 { 00:26:58.546 "name": "BaseBdev2", 00:26:58.546 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:58.546 "is_configured": true, 00:26:58.546 "data_offset": 2048, 00:26:58.546 "data_size": 63488 00:26:58.546 }, 00:26:58.546 { 00:26:58.546 "name": "BaseBdev3", 00:26:58.546 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:58.546 "is_configured": true, 00:26:58.546 "data_offset": 2048, 00:26:58.546 "data_size": 63488 00:26:58.546 }, 00:26:58.546 { 00:26:58.546 "name": "BaseBdev4", 00:26:58.546 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:58.546 "is_configured": true, 00:26:58.546 "data_offset": 2048, 00:26:58.546 "data_size": 63488 00:26:58.546 } 00:26:58.546 ] 00:26:58.546 }' 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:58.546 12:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.546 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.546 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.546 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:58.546 "name": "raid_bdev1", 00:26:58.546 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:58.546 "strip_size_kb": 64, 00:26:58.546 "state": "online", 00:26:58.546 "raid_level": "raid5f", 00:26:58.546 "superblock": true, 00:26:58.546 "num_base_bdevs": 4, 00:26:58.546 "num_base_bdevs_discovered": 3, 00:26:58.546 "num_base_bdevs_operational": 3, 00:26:58.546 "base_bdevs_list": [ 00:26:58.546 { 00:26:58.546 "name": null, 00:26:58.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:58.546 "is_configured": false, 00:26:58.546 "data_offset": 0, 00:26:58.546 "data_size": 63488 00:26:58.546 }, 00:26:58.546 { 00:26:58.546 "name": "BaseBdev2", 00:26:58.546 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:58.546 "is_configured": true, 00:26:58.546 "data_offset": 2048, 00:26:58.546 "data_size": 63488 00:26:58.546 }, 00:26:58.546 { 00:26:58.546 "name": "BaseBdev3", 00:26:58.546 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:58.546 "is_configured": true, 00:26:58.546 "data_offset": 2048, 00:26:58.546 "data_size": 63488 00:26:58.546 }, 00:26:58.546 { 00:26:58.546 "name": "BaseBdev4", 00:26:58.546 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:58.546 "is_configured": true, 00:26:58.546 "data_offset": 2048, 00:26:58.546 "data_size": 63488 00:26:58.546 } 00:26:58.546 ] 00:26:58.546 }' 00:26:58.546 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:58.546 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:58.546 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:58.546 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:58.546 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:58.546 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:26:58.546 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:58.546 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:58.546 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:58.547 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:58.547 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:58.547 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:58.547 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.547 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.547 [2024-12-05 12:58:41.097861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:58.547 [2024-12-05 12:58:41.098018] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:58.547 [2024-12-05 12:58:41.098034] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:58.547 request: 00:26:58.547 { 00:26:58.547 
"base_bdev": "BaseBdev1", 00:26:58.547 "raid_bdev": "raid_bdev1", 00:26:58.547 "method": "bdev_raid_add_base_bdev", 00:26:58.547 "req_id": 1 00:26:58.547 } 00:26:58.547 Got JSON-RPC error response 00:26:58.547 response: 00:26:58.547 { 00:26:58.547 "code": -22, 00:26:58.547 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:26:58.547 } 00:26:58.547 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:58.547 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:26:58.547 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:58.547 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:58.547 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:58.547 12:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:59.938 "name": "raid_bdev1", 00:26:59.938 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:59.938 "strip_size_kb": 64, 00:26:59.938 "state": "online", 00:26:59.938 "raid_level": "raid5f", 00:26:59.938 "superblock": true, 00:26:59.938 "num_base_bdevs": 4, 00:26:59.938 "num_base_bdevs_discovered": 3, 00:26:59.938 "num_base_bdevs_operational": 3, 00:26:59.938 "base_bdevs_list": [ 00:26:59.938 { 00:26:59.938 "name": null, 00:26:59.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:59.938 "is_configured": false, 00:26:59.938 "data_offset": 0, 00:26:59.938 "data_size": 63488 00:26:59.938 }, 00:26:59.938 { 00:26:59.938 "name": "BaseBdev2", 00:26:59.938 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:59.938 "is_configured": true, 00:26:59.938 "data_offset": 2048, 00:26:59.938 "data_size": 63488 00:26:59.938 }, 00:26:59.938 { 00:26:59.938 "name": "BaseBdev3", 00:26:59.938 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:59.938 "is_configured": true, 00:26:59.938 "data_offset": 2048, 00:26:59.938 "data_size": 63488 00:26:59.938 }, 00:26:59.938 { 00:26:59.938 "name": "BaseBdev4", 00:26:59.938 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:59.938 "is_configured": true, 
00:26:59.938 "data_offset": 2048, 00:26:59.938 "data_size": 63488 00:26:59.938 } 00:26:59.938 ] 00:26:59.938 }' 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:59.938 "name": "raid_bdev1", 00:26:59.938 "uuid": "a1bd6568-b6db-46d0-be55-8d3436273678", 00:26:59.938 "strip_size_kb": 64, 00:26:59.938 "state": "online", 00:26:59.938 "raid_level": "raid5f", 00:26:59.938 "superblock": true, 00:26:59.938 "num_base_bdevs": 4, 00:26:59.938 "num_base_bdevs_discovered": 3, 00:26:59.938 "num_base_bdevs_operational": 3, 00:26:59.938 "base_bdevs_list": [ 00:26:59.938 { 00:26:59.938 "name": null, 00:26:59.938 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:59.938 "is_configured": false, 00:26:59.938 "data_offset": 0, 00:26:59.938 "data_size": 63488 00:26:59.938 }, 00:26:59.938 { 00:26:59.938 "name": "BaseBdev2", 00:26:59.938 "uuid": "a9418342-ec68-5718-9b22-7c19cf7b8946", 00:26:59.938 "is_configured": true, 00:26:59.938 "data_offset": 2048, 00:26:59.938 "data_size": 63488 00:26:59.938 }, 00:26:59.938 { 00:26:59.938 "name": "BaseBdev3", 00:26:59.938 "uuid": "8b47a3a3-65bd-5337-85fe-115aa3542226", 00:26:59.938 "is_configured": true, 00:26:59.938 "data_offset": 2048, 00:26:59.938 "data_size": 63488 00:26:59.938 }, 00:26:59.938 { 00:26:59.938 "name": "BaseBdev4", 00:26:59.938 "uuid": "b5c20394-dbe6-5dbe-9f94-3be9095c703a", 00:26:59.938 "is_configured": true, 00:26:59.938 "data_offset": 2048, 00:26:59.938 "data_size": 63488 00:26:59.938 } 00:26:59.938 ] 00:26:59.938 }' 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82427 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82427 ']' 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82427 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:59.938 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82427 00:27:00.196 
killing process with pid 82427 00:27:00.196 Received shutdown signal, test time was about 60.000000 seconds 00:27:00.196 00:27:00.196 Latency(us) 00:27:00.196 [2024-12-05T12:58:42.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.196 [2024-12-05T12:58:42.783Z] =================================================================================================================== 00:27:00.196 [2024-12-05T12:58:42.783Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:00.196 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:00.196 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:00.196 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82427' 00:27:00.196 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82427 00:27:00.196 [2024-12-05 12:58:42.535722] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:00.196 12:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82427 00:27:00.196 [2024-12-05 12:58:42.535842] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:00.196 [2024-12-05 12:58:42.535921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:00.196 [2024-12-05 12:58:42.535937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:27:00.454 [2024-12-05 12:58:42.836383] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:01.021 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:27:01.021 00:27:01.021 real 0m24.691s 00:27:01.021 user 0m29.840s 00:27:01.021 sys 0m2.235s 00:27:01.021 ************************************ 00:27:01.021 END TEST 
raid5f_rebuild_test_sb 00:27:01.021 ************************************ 00:27:01.021 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:01.021 12:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:01.021 12:58:43 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:27:01.021 12:58:43 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:27:01.021 12:58:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:01.021 12:58:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:01.021 12:58:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:01.279 ************************************ 00:27:01.279 START TEST raid_state_function_test_sb_4k 00:27:01.279 ************************************ 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k 
-- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=83227 00:27:01.279 Process raid pid: 83227 00:27:01.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83227' 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 83227 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 83227 ']' 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:01.279 12:58:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:01.279 [2024-12-05 12:58:43.682741] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:27:01.279 [2024-12-05 12:58:43.682998] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.279 [2024-12-05 12:58:43.841736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.537 [2024-12-05 12:58:43.945140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.537 [2024-12-05 12:58:44.084062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:01.537 [2024-12-05 12:58:44.084141] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:02.104 [2024-12-05 12:58:44.544424] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:02.104 [2024-12-05 12:58:44.544480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:02.104 [2024-12-05 12:58:44.544501] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:02.104 [2024-12-05 12:58:44.544513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:02.104 "name": "Existed_Raid", 00:27:02.104 "uuid": 
"abc1429c-9f60-4463-a712-a7eb55cb4ef8", 00:27:02.104 "strip_size_kb": 0, 00:27:02.104 "state": "configuring", 00:27:02.104 "raid_level": "raid1", 00:27:02.104 "superblock": true, 00:27:02.104 "num_base_bdevs": 2, 00:27:02.104 "num_base_bdevs_discovered": 0, 00:27:02.104 "num_base_bdevs_operational": 2, 00:27:02.104 "base_bdevs_list": [ 00:27:02.104 { 00:27:02.104 "name": "BaseBdev1", 00:27:02.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.104 "is_configured": false, 00:27:02.104 "data_offset": 0, 00:27:02.104 "data_size": 0 00:27:02.104 }, 00:27:02.104 { 00:27:02.104 "name": "BaseBdev2", 00:27:02.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.104 "is_configured": false, 00:27:02.104 "data_offset": 0, 00:27:02.104 "data_size": 0 00:27:02.104 } 00:27:02.104 ] 00:27:02.104 }' 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:02.104 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:02.364 [2024-12-05 12:58:44.876437] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:02.364 [2024-12-05 12:58:44.876468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:02.364 12:58:44 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:02.364 [2024-12-05 12:58:44.884443] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:02.364 [2024-12-05 12:58:44.884479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:02.364 [2024-12-05 12:58:44.884487] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:02.364 [2024-12-05 12:58:44.884510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:02.364 [2024-12-05 12:58:44.921358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:02.364 BaseBdev1 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.364 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:02.364 [ 00:27:02.364 { 00:27:02.364 "name": "BaseBdev1", 00:27:02.364 "aliases": [ 00:27:02.364 "9c3d7dd7-88ab-46dd-b79e-4e829850ce86" 00:27:02.364 ], 00:27:02.364 "product_name": "Malloc disk", 00:27:02.364 "block_size": 4096, 00:27:02.364 "num_blocks": 8192, 00:27:02.364 "uuid": "9c3d7dd7-88ab-46dd-b79e-4e829850ce86", 00:27:02.364 "assigned_rate_limits": { 00:27:02.364 "rw_ios_per_sec": 0, 00:27:02.364 "rw_mbytes_per_sec": 0, 00:27:02.364 "r_mbytes_per_sec": 0, 00:27:02.364 "w_mbytes_per_sec": 0 00:27:02.364 }, 00:27:02.364 "claimed": true, 00:27:02.364 "claim_type": "exclusive_write", 00:27:02.364 "zoned": false, 00:27:02.364 "supported_io_types": { 00:27:02.364 "read": true, 00:27:02.364 "write": true, 00:27:02.364 "unmap": true, 00:27:02.364 "flush": true, 00:27:02.364 "reset": true, 00:27:02.364 "nvme_admin": false, 00:27:02.364 "nvme_io": false, 00:27:02.365 "nvme_io_md": false, 00:27:02.365 "write_zeroes": true, 00:27:02.365 "zcopy": true, 00:27:02.365 
"get_zone_info": false, 00:27:02.365 "zone_management": false, 00:27:02.365 "zone_append": false, 00:27:02.365 "compare": false, 00:27:02.365 "compare_and_write": false, 00:27:02.365 "abort": true, 00:27:02.365 "seek_hole": false, 00:27:02.365 "seek_data": false, 00:27:02.365 "copy": true, 00:27:02.365 "nvme_iov_md": false 00:27:02.365 }, 00:27:02.365 "memory_domains": [ 00:27:02.365 { 00:27:02.365 "dma_device_id": "system", 00:27:02.365 "dma_device_type": 1 00:27:02.365 }, 00:27:02.365 { 00:27:02.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:02.365 "dma_device_type": 2 00:27:02.365 } 00:27:02.365 ], 00:27:02.365 "driver_specific": {} 00:27:02.623 } 00:27:02.623 ] 00:27:02.623 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.623 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:27:02.623 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:02.623 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:02.623 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:02.623 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:02.623 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:02.623 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:02.623 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:02.623 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:02.623 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:27:02.623 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:02.623 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:02.623 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:02.623 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.623 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:02.623 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.623 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:02.623 "name": "Existed_Raid", 00:27:02.623 "uuid": "184811f2-74b0-4e12-a73b-fdcb019712af", 00:27:02.623 "strip_size_kb": 0, 00:27:02.623 "state": "configuring", 00:27:02.623 "raid_level": "raid1", 00:27:02.623 "superblock": true, 00:27:02.623 "num_base_bdevs": 2, 00:27:02.623 "num_base_bdevs_discovered": 1, 00:27:02.623 "num_base_bdevs_operational": 2, 00:27:02.623 "base_bdevs_list": [ 00:27:02.623 { 00:27:02.623 "name": "BaseBdev1", 00:27:02.623 "uuid": "9c3d7dd7-88ab-46dd-b79e-4e829850ce86", 00:27:02.623 "is_configured": true, 00:27:02.623 "data_offset": 256, 00:27:02.623 "data_size": 7936 00:27:02.623 }, 00:27:02.623 { 00:27:02.623 "name": "BaseBdev2", 00:27:02.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.623 "is_configured": false, 00:27:02.623 "data_offset": 0, 00:27:02.623 "data_size": 0 00:27:02.623 } 00:27:02.623 ] 00:27:02.623 }' 00:27:02.623 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:02.623 12:58:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:02.881 [2024-12-05 12:58:45.277469] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:02.881 [2024-12-05 12:58:45.277625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:02.881 [2024-12-05 12:58:45.285538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:02.881 [2024-12-05 12:58:45.287464] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:02.881 [2024-12-05 12:58:45.287590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:02.881 12:58:45 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:02.881 "name": "Existed_Raid", 00:27:02.881 "uuid": "8c18abb3-42da-4fd4-8bfa-4122d3768196", 00:27:02.881 "strip_size_kb": 0, 00:27:02.881 "state": "configuring", 00:27:02.881 "raid_level": "raid1", 00:27:02.881 "superblock": true, 
00:27:02.881 "num_base_bdevs": 2, 00:27:02.881 "num_base_bdevs_discovered": 1, 00:27:02.881 "num_base_bdevs_operational": 2, 00:27:02.881 "base_bdevs_list": [ 00:27:02.881 { 00:27:02.881 "name": "BaseBdev1", 00:27:02.881 "uuid": "9c3d7dd7-88ab-46dd-b79e-4e829850ce86", 00:27:02.881 "is_configured": true, 00:27:02.881 "data_offset": 256, 00:27:02.881 "data_size": 7936 00:27:02.881 }, 00:27:02.881 { 00:27:02.881 "name": "BaseBdev2", 00:27:02.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.881 "is_configured": false, 00:27:02.881 "data_offset": 0, 00:27:02.881 "data_size": 0 00:27:02.881 } 00:27:02.881 ] 00:27:02.881 }' 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:02.881 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:03.139 [2024-12-05 12:58:45.612257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:03.139 [2024-12-05 12:58:45.612475] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:03.139 [2024-12-05 12:58:45.612512] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:03.139 [2024-12-05 12:58:45.612769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:03.139 [2024-12-05 12:58:45.612911] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:03.139 [2024-12-05 12:58:45.612964] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:27:03.139 BaseBdev2 [2024-12-05 12:58:45.613207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:03.139 [ 00:27:03.139 { 00:27:03.139 "name": "BaseBdev2", 00:27:03.139 "aliases": [ 00:27:03.139 "9f8c667b-ca39-49ca-a965-e0495c5b75ab" 00:27:03.139 ], 00:27:03.139 "product_name": "Malloc
disk", 00:27:03.139 "block_size": 4096, 00:27:03.139 "num_blocks": 8192, 00:27:03.139 "uuid": "9f8c667b-ca39-49ca-a965-e0495c5b75ab", 00:27:03.139 "assigned_rate_limits": { 00:27:03.139 "rw_ios_per_sec": 0, 00:27:03.139 "rw_mbytes_per_sec": 0, 00:27:03.139 "r_mbytes_per_sec": 0, 00:27:03.139 "w_mbytes_per_sec": 0 00:27:03.139 }, 00:27:03.139 "claimed": true, 00:27:03.139 "claim_type": "exclusive_write", 00:27:03.139 "zoned": false, 00:27:03.139 "supported_io_types": { 00:27:03.139 "read": true, 00:27:03.139 "write": true, 00:27:03.139 "unmap": true, 00:27:03.139 "flush": true, 00:27:03.139 "reset": true, 00:27:03.139 "nvme_admin": false, 00:27:03.139 "nvme_io": false, 00:27:03.139 "nvme_io_md": false, 00:27:03.139 "write_zeroes": true, 00:27:03.139 "zcopy": true, 00:27:03.139 "get_zone_info": false, 00:27:03.139 "zone_management": false, 00:27:03.139 "zone_append": false, 00:27:03.139 "compare": false, 00:27:03.139 "compare_and_write": false, 00:27:03.139 "abort": true, 00:27:03.139 "seek_hole": false, 00:27:03.139 "seek_data": false, 00:27:03.139 "copy": true, 00:27:03.139 "nvme_iov_md": false 00:27:03.139 }, 00:27:03.139 "memory_domains": [ 00:27:03.139 { 00:27:03.139 "dma_device_id": "system", 00:27:03.139 "dma_device_type": 1 00:27:03.139 }, 00:27:03.139 { 00:27:03.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.139 "dma_device_type": 2 00:27:03.139 } 00:27:03.139 ], 00:27:03.139 "driver_specific": {} 00:27:03.139 } 00:27:03.139 ] 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.139 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:03.139 "name": "Existed_Raid", 00:27:03.139 "uuid": "8c18abb3-42da-4fd4-8bfa-4122d3768196", 00:27:03.139 "strip_size_kb": 0, 00:27:03.140 "state": "online", 
00:27:03.140 "raid_level": "raid1", 00:27:03.140 "superblock": true, 00:27:03.140 "num_base_bdevs": 2, 00:27:03.140 "num_base_bdevs_discovered": 2, 00:27:03.140 "num_base_bdevs_operational": 2, 00:27:03.140 "base_bdevs_list": [ 00:27:03.140 { 00:27:03.140 "name": "BaseBdev1", 00:27:03.140 "uuid": "9c3d7dd7-88ab-46dd-b79e-4e829850ce86", 00:27:03.140 "is_configured": true, 00:27:03.140 "data_offset": 256, 00:27:03.140 "data_size": 7936 00:27:03.140 }, 00:27:03.140 { 00:27:03.140 "name": "BaseBdev2", 00:27:03.140 "uuid": "9f8c667b-ca39-49ca-a965-e0495c5b75ab", 00:27:03.140 "is_configured": true, 00:27:03.140 "data_offset": 256, 00:27:03.140 "data_size": 7936 00:27:03.140 } 00:27:03.140 ] 00:27:03.140 }' 00:27:03.140 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:03.140 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:03.423 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:03.423 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:03.423 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:03.423 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:03.423 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:27:03.423 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:03.423 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:03.423 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:03.423 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:03.423 12:58:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:03.423 [2024-12-05 12:58:45.996709] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:03.682 "name": "Existed_Raid", 00:27:03.682 "aliases": [ 00:27:03.682 "8c18abb3-42da-4fd4-8bfa-4122d3768196" 00:27:03.682 ], 00:27:03.682 "product_name": "Raid Volume", 00:27:03.682 "block_size": 4096, 00:27:03.682 "num_blocks": 7936, 00:27:03.682 "uuid": "8c18abb3-42da-4fd4-8bfa-4122d3768196", 00:27:03.682 "assigned_rate_limits": { 00:27:03.682 "rw_ios_per_sec": 0, 00:27:03.682 "rw_mbytes_per_sec": 0, 00:27:03.682 "r_mbytes_per_sec": 0, 00:27:03.682 "w_mbytes_per_sec": 0 00:27:03.682 }, 00:27:03.682 "claimed": false, 00:27:03.682 "zoned": false, 00:27:03.682 "supported_io_types": { 00:27:03.682 "read": true, 00:27:03.682 "write": true, 00:27:03.682 "unmap": false, 00:27:03.682 "flush": false, 00:27:03.682 "reset": true, 00:27:03.682 "nvme_admin": false, 00:27:03.682 "nvme_io": false, 00:27:03.682 "nvme_io_md": false, 00:27:03.682 "write_zeroes": true, 00:27:03.682 "zcopy": false, 00:27:03.682 "get_zone_info": false, 00:27:03.682 "zone_management": false, 00:27:03.682 "zone_append": false, 00:27:03.682 "compare": false, 00:27:03.682 "compare_and_write": false, 00:27:03.682 "abort": false, 00:27:03.682 "seek_hole": false, 00:27:03.682 "seek_data": false, 00:27:03.682 "copy": false, 00:27:03.682 "nvme_iov_md": false 00:27:03.682 }, 00:27:03.682 "memory_domains": [ 00:27:03.682 { 00:27:03.682 "dma_device_id": "system", 00:27:03.682 "dma_device_type": 1 00:27:03.682 }, 00:27:03.682 { 00:27:03.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.682 "dma_device_type": 2 00:27:03.682 }, 00:27:03.682 { 00:27:03.682 
"dma_device_id": "system", 00:27:03.682 "dma_device_type": 1 00:27:03.682 }, 00:27:03.682 { 00:27:03.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.682 "dma_device_type": 2 00:27:03.682 } 00:27:03.682 ], 00:27:03.682 "driver_specific": { 00:27:03.682 "raid": { 00:27:03.682 "uuid": "8c18abb3-42da-4fd4-8bfa-4122d3768196", 00:27:03.682 "strip_size_kb": 0, 00:27:03.682 "state": "online", 00:27:03.682 "raid_level": "raid1", 00:27:03.682 "superblock": true, 00:27:03.682 "num_base_bdevs": 2, 00:27:03.682 "num_base_bdevs_discovered": 2, 00:27:03.682 "num_base_bdevs_operational": 2, 00:27:03.682 "base_bdevs_list": [ 00:27:03.682 { 00:27:03.682 "name": "BaseBdev1", 00:27:03.682 "uuid": "9c3d7dd7-88ab-46dd-b79e-4e829850ce86", 00:27:03.682 "is_configured": true, 00:27:03.682 "data_offset": 256, 00:27:03.682 "data_size": 7936 00:27:03.682 }, 00:27:03.682 { 00:27:03.682 "name": "BaseBdev2", 00:27:03.682 "uuid": "9f8c667b-ca39-49ca-a965-e0495c5b75ab", 00:27:03.682 "is_configured": true, 00:27:03.682 "data_offset": 256, 00:27:03.682 "data_size": 7936 00:27:03.682 } 00:27:03.682 ] 00:27:03.682 } 00:27:03.682 } 00:27:03.682 }' 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:03.682 BaseBdev2' 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.682 
12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:03.682 [2024-12-05 12:58:46.160465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:03.682 12:58:46 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:03.682 "name": "Existed_Raid", 00:27:03.682 "uuid": "8c18abb3-42da-4fd4-8bfa-4122d3768196", 00:27:03.682 "strip_size_kb": 0, 00:27:03.682 "state": "online", 00:27:03.682 "raid_level": "raid1", 00:27:03.682 "superblock": true, 00:27:03.682 "num_base_bdevs": 2, 00:27:03.682 "num_base_bdevs_discovered": 1, 00:27:03.682 "num_base_bdevs_operational": 1, 00:27:03.682 "base_bdevs_list": [ 00:27:03.682 { 00:27:03.682 "name": null, 00:27:03.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:03.682 "is_configured": false, 00:27:03.682 "data_offset": 0, 00:27:03.682 "data_size": 7936 00:27:03.682 }, 00:27:03.682 { 00:27:03.682 "name": "BaseBdev2", 00:27:03.682 "uuid": "9f8c667b-ca39-49ca-a965-e0495c5b75ab", 00:27:03.682 "is_configured": true, 00:27:03.682 "data_offset": 256, 00:27:03.682 "data_size": 7936 00:27:03.682 } 00:27:03.682 ] 00:27:03.682 }' 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:03.682 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:27:04.275 12:58:46 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:04.275 [2024-12-05 12:58:46.591666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:04.275 [2024-12-05 12:58:46.591761] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:04.275 [2024-12-05 12:58:46.652232] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:04.275 [2024-12-05 12:58:46.652283] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:04.275 [2024-12-05 12:58:46.652294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:04.275 12:58:46 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 83227 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 83227 ']' 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 83227 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83227 00:27:04.275 killing process with pid 83227 00:27:04.275 12:58:46 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83227' 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 83227 00:27:04.275 [2024-12-05 12:58:46.705373] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:04.275 12:58:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 83227 00:27:04.275 [2024-12-05 12:58:46.715944] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:05.208 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:27:05.208 00:27:05.208 real 0m3.827s 00:27:05.208 user 0m5.525s 00:27:05.208 sys 0m0.581s 00:27:05.208 ************************************ 00:27:05.208 END TEST raid_state_function_test_sb_4k 00:27:05.208 ************************************ 00:27:05.208 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:05.208 12:58:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:05.208 12:58:47 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:27:05.208 12:58:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:05.208 12:58:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:05.208 12:58:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:05.208 ************************************ 00:27:05.208 START TEST raid_superblock_test_4k 00:27:05.208 ************************************ 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:27:05.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=83468 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 83468 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 83468 ']' 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.208 12:58:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:05.209 12:58:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:05.209 12:58:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:27:05.209 [2024-12-05 12:58:47.546728] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:27:05.209 [2024-12-05 12:58:47.547042] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83468 ] 00:27:05.209 [2024-12-05 12:58:47.707106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.466 [2024-12-05 12:58:47.809148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.466 [2024-12-05 12:58:47.946443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:05.466 [2024-12-05 12:58:47.946485] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.033 malloc1 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.033 [2024-12-05 12:58:48.430638] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:06.033 [2024-12-05 12:58:48.430694] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:06.033 [2024-12-05 12:58:48.430716] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:06.033 [2024-12-05 12:58:48.430726] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:06.033 [2024-12-05 12:58:48.432912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:06.033 [2024-12-05 12:58:48.433056] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:06.033 pt1 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.033 malloc2 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.033 [2024-12-05 12:58:48.466921] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:06.033 [2024-12-05 12:58:48.466973] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:06.033 [2024-12-05 12:58:48.466997] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:06.033 [2024-12-05 12:58:48.467006] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:06.033 [2024-12-05 12:58:48.469155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:06.033 [2024-12-05 
12:58:48.469188] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:06.033 pt2 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:27:06.033 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.034 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.034 [2024-12-05 12:58:48.474968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:06.034 [2024-12-05 12:58:48.476833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:06.034 [2024-12-05 12:58:48.476996] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:06.034 [2024-12-05 12:58:48.477011] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:06.034 [2024-12-05 12:58:48.477270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:06.034 [2024-12-05 12:58:48.477409] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:06.034 [2024-12-05 12:58:48.477422] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:06.034 [2024-12-05 12:58:48.477587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:06.034 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.034 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:06.034 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:06.034 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:06.034 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:06.034 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:06.034 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:06.034 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:06.034 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:06.034 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:06.034 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:06.034 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:06.034 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.034 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.034 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:06.034 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.034 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:06.034 "name": "raid_bdev1", 00:27:06.034 "uuid": "a818fde0-1751-4883-b3d8-8009057f0afd", 00:27:06.034 "strip_size_kb": 0, 00:27:06.034 "state": "online", 00:27:06.034 "raid_level": "raid1", 00:27:06.034 "superblock": true, 00:27:06.034 "num_base_bdevs": 2, 00:27:06.034 
"num_base_bdevs_discovered": 2, 00:27:06.034 "num_base_bdevs_operational": 2, 00:27:06.034 "base_bdevs_list": [ 00:27:06.034 { 00:27:06.034 "name": "pt1", 00:27:06.034 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:06.034 "is_configured": true, 00:27:06.034 "data_offset": 256, 00:27:06.034 "data_size": 7936 00:27:06.034 }, 00:27:06.034 { 00:27:06.034 "name": "pt2", 00:27:06.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:06.034 "is_configured": true, 00:27:06.034 "data_offset": 256, 00:27:06.034 "data_size": 7936 00:27:06.034 } 00:27:06.034 ] 00:27:06.034 }' 00:27:06.034 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:06.034 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.292 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:27:06.292 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:06.292 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:06.292 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:06.292 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:27:06.292 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:06.292 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:06.292 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.292 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:06.292 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.292 [2024-12-05 12:58:48.791309] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:27:06.292 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.292 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:06.292 "name": "raid_bdev1", 00:27:06.292 "aliases": [ 00:27:06.292 "a818fde0-1751-4883-b3d8-8009057f0afd" 00:27:06.292 ], 00:27:06.292 "product_name": "Raid Volume", 00:27:06.292 "block_size": 4096, 00:27:06.292 "num_blocks": 7936, 00:27:06.292 "uuid": "a818fde0-1751-4883-b3d8-8009057f0afd", 00:27:06.292 "assigned_rate_limits": { 00:27:06.292 "rw_ios_per_sec": 0, 00:27:06.293 "rw_mbytes_per_sec": 0, 00:27:06.293 "r_mbytes_per_sec": 0, 00:27:06.293 "w_mbytes_per_sec": 0 00:27:06.293 }, 00:27:06.293 "claimed": false, 00:27:06.293 "zoned": false, 00:27:06.293 "supported_io_types": { 00:27:06.293 "read": true, 00:27:06.293 "write": true, 00:27:06.293 "unmap": false, 00:27:06.293 "flush": false, 00:27:06.293 "reset": true, 00:27:06.293 "nvme_admin": false, 00:27:06.293 "nvme_io": false, 00:27:06.293 "nvme_io_md": false, 00:27:06.293 "write_zeroes": true, 00:27:06.293 "zcopy": false, 00:27:06.293 "get_zone_info": false, 00:27:06.293 "zone_management": false, 00:27:06.293 "zone_append": false, 00:27:06.293 "compare": false, 00:27:06.293 "compare_and_write": false, 00:27:06.293 "abort": false, 00:27:06.293 "seek_hole": false, 00:27:06.293 "seek_data": false, 00:27:06.293 "copy": false, 00:27:06.293 "nvme_iov_md": false 00:27:06.293 }, 00:27:06.293 "memory_domains": [ 00:27:06.293 { 00:27:06.293 "dma_device_id": "system", 00:27:06.293 "dma_device_type": 1 00:27:06.293 }, 00:27:06.293 { 00:27:06.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:06.293 "dma_device_type": 2 00:27:06.293 }, 00:27:06.293 { 00:27:06.293 "dma_device_id": "system", 00:27:06.293 "dma_device_type": 1 00:27:06.293 }, 00:27:06.293 { 00:27:06.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:06.293 "dma_device_type": 2 00:27:06.293 } 00:27:06.293 ], 
00:27:06.293 "driver_specific": { 00:27:06.293 "raid": { 00:27:06.293 "uuid": "a818fde0-1751-4883-b3d8-8009057f0afd", 00:27:06.293 "strip_size_kb": 0, 00:27:06.293 "state": "online", 00:27:06.293 "raid_level": "raid1", 00:27:06.293 "superblock": true, 00:27:06.293 "num_base_bdevs": 2, 00:27:06.293 "num_base_bdevs_discovered": 2, 00:27:06.293 "num_base_bdevs_operational": 2, 00:27:06.293 "base_bdevs_list": [ 00:27:06.293 { 00:27:06.293 "name": "pt1", 00:27:06.293 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:06.293 "is_configured": true, 00:27:06.293 "data_offset": 256, 00:27:06.293 "data_size": 7936 00:27:06.293 }, 00:27:06.293 { 00:27:06.293 "name": "pt2", 00:27:06.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:06.293 "is_configured": true, 00:27:06.293 "data_offset": 256, 00:27:06.293 "data_size": 7936 00:27:06.293 } 00:27:06.293 ] 00:27:06.293 } 00:27:06.293 } 00:27:06.293 }' 00:27:06.293 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:06.293 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:06.293 pt2' 00:27:06.293 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:27:06.552 [2024-12-05 12:58:48.955330] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a818fde0-1751-4883-b3d8-8009057f0afd 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z a818fde0-1751-4883-b3d8-8009057f0afd ']' 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.552 [2024-12-05 12:58:48.987033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:06.552 [2024-12-05 12:58:48.987141] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:06.552 [2024-12-05 12:58:48.987262] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:06.552 [2024-12-05 12:58:48.987367] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:06.552 [2024-12-05 12:58:48.987446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.552 12:58:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:27:06.552 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.552 12:58:49 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.553 [2024-12-05 12:58:49.079087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:06.553 [2024-12-05 12:58:49.081132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:06.553 [2024-12-05 12:58:49.081265] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:06.553 [2024-12-05 12:58:49.081384] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:06.553 [2024-12-05 12:58:49.081456] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:06.553 [2024-12-05 12:58:49.081670] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:27:06.553 request: 00:27:06.553 { 00:27:06.553 "name": "raid_bdev1", 00:27:06.553 "raid_level": "raid1", 00:27:06.553 "base_bdevs": [ 00:27:06.553 "malloc1", 00:27:06.553 "malloc2" 00:27:06.553 ], 00:27:06.553 "superblock": false, 00:27:06.553 "method": "bdev_raid_create", 00:27:06.553 "req_id": 1 00:27:06.553 } 00:27:06.553 Got JSON-RPC error response 00:27:06.553 response: 00:27:06.553 { 00:27:06.553 "code": -17, 00:27:06.553 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:06.553 } 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.553 [2024-12-05 12:58:49.119076] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:06.553 [2024-12-05 12:58:49.119132] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:06.553 [2024-12-05 12:58:49.119150] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:06.553 [2024-12-05 12:58:49.119160] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:06.553 [2024-12-05 12:58:49.121360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:06.553 [2024-12-05 12:58:49.121397] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:06.553 [2024-12-05 12:58:49.121475] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:06.553 [2024-12-05 12:58:49.121549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:06.553 pt1 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.553 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.897 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:06.898 "name": "raid_bdev1", 00:27:06.898 "uuid": "a818fde0-1751-4883-b3d8-8009057f0afd", 00:27:06.898 "strip_size_kb": 0, 00:27:06.898 "state": "configuring", 00:27:06.898 "raid_level": "raid1", 00:27:06.898 "superblock": true, 00:27:06.898 "num_base_bdevs": 2, 00:27:06.898 "num_base_bdevs_discovered": 1, 00:27:06.898 "num_base_bdevs_operational": 2, 00:27:06.898 "base_bdevs_list": [ 00:27:06.898 { 00:27:06.898 "name": "pt1", 00:27:06.898 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:06.898 "is_configured": true, 00:27:06.898 "data_offset": 256, 00:27:06.898 "data_size": 7936 00:27:06.898 }, 00:27:06.898 { 00:27:06.898 "name": null, 00:27:06.898 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:06.898 "is_configured": false, 00:27:06.898 "data_offset": 256, 00:27:06.898 "data_size": 7936 00:27:06.898 } 
00:27:06.898 ] 00:27:06.898 }' 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.898 [2024-12-05 12:58:49.431167] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:06.898 [2024-12-05 12:58:49.431350] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:06.898 [2024-12-05 12:58:49.431375] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:06.898 [2024-12-05 12:58:49.431387] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:06.898 [2024-12-05 12:58:49.431825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:06.898 [2024-12-05 12:58:49.431851] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:06.898 [2024-12-05 12:58:49.431922] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:06.898 [2024-12-05 12:58:49.431946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:06.898 [2024-12-05 12:58:49.432052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:27:06.898 [2024-12-05 12:58:49.432069] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:06.898 [2024-12-05 12:58:49.432308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:06.898 [2024-12-05 12:58:49.432446] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:06.898 [2024-12-05 12:58:49.432455] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:27:06.898 [2024-12-05 12:58:49.432610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:06.898 pt2 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:06.898 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.157 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:07.157 "name": "raid_bdev1", 00:27:07.157 "uuid": "a818fde0-1751-4883-b3d8-8009057f0afd", 00:27:07.157 "strip_size_kb": 0, 00:27:07.157 "state": "online", 00:27:07.157 "raid_level": "raid1", 00:27:07.157 "superblock": true, 00:27:07.157 "num_base_bdevs": 2, 00:27:07.157 "num_base_bdevs_discovered": 2, 00:27:07.157 "num_base_bdevs_operational": 2, 00:27:07.157 "base_bdevs_list": [ 00:27:07.157 { 00:27:07.157 "name": "pt1", 00:27:07.158 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:07.158 "is_configured": true, 00:27:07.158 "data_offset": 256, 00:27:07.158 "data_size": 7936 00:27:07.158 }, 00:27:07.158 { 00:27:07.158 "name": "pt2", 00:27:07.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:07.158 "is_configured": true, 00:27:07.158 "data_offset": 256, 00:27:07.158 "data_size": 7936 00:27:07.158 } 00:27:07.158 ] 00:27:07.158 }' 00:27:07.158 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:07.158 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:07.158 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:27:07.158 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:07.158 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:07.158 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:07.158 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:27:07.158 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:07.158 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:07.158 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:07.158 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.158 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:07.418 [2024-12-05 12:58:49.743521] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:07.418 "name": "raid_bdev1", 00:27:07.418 "aliases": [ 00:27:07.418 "a818fde0-1751-4883-b3d8-8009057f0afd" 00:27:07.418 ], 00:27:07.418 "product_name": "Raid Volume", 00:27:07.418 "block_size": 4096, 00:27:07.418 "num_blocks": 7936, 00:27:07.418 "uuid": "a818fde0-1751-4883-b3d8-8009057f0afd", 00:27:07.418 "assigned_rate_limits": { 00:27:07.418 "rw_ios_per_sec": 0, 00:27:07.418 "rw_mbytes_per_sec": 0, 00:27:07.418 "r_mbytes_per_sec": 0, 00:27:07.418 "w_mbytes_per_sec": 0 00:27:07.418 }, 00:27:07.418 "claimed": false, 00:27:07.418 "zoned": false, 00:27:07.418 "supported_io_types": { 00:27:07.418 "read": true, 00:27:07.418 "write": true, 00:27:07.418 "unmap": false, 
00:27:07.418 "flush": false, 00:27:07.418 "reset": true, 00:27:07.418 "nvme_admin": false, 00:27:07.418 "nvme_io": false, 00:27:07.418 "nvme_io_md": false, 00:27:07.418 "write_zeroes": true, 00:27:07.418 "zcopy": false, 00:27:07.418 "get_zone_info": false, 00:27:07.418 "zone_management": false, 00:27:07.418 "zone_append": false, 00:27:07.418 "compare": false, 00:27:07.418 "compare_and_write": false, 00:27:07.418 "abort": false, 00:27:07.418 "seek_hole": false, 00:27:07.418 "seek_data": false, 00:27:07.418 "copy": false, 00:27:07.418 "nvme_iov_md": false 00:27:07.418 }, 00:27:07.418 "memory_domains": [ 00:27:07.418 { 00:27:07.418 "dma_device_id": "system", 00:27:07.418 "dma_device_type": 1 00:27:07.418 }, 00:27:07.418 { 00:27:07.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:07.418 "dma_device_type": 2 00:27:07.418 }, 00:27:07.418 { 00:27:07.418 "dma_device_id": "system", 00:27:07.418 "dma_device_type": 1 00:27:07.418 }, 00:27:07.418 { 00:27:07.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:07.418 "dma_device_type": 2 00:27:07.418 } 00:27:07.418 ], 00:27:07.418 "driver_specific": { 00:27:07.418 "raid": { 00:27:07.418 "uuid": "a818fde0-1751-4883-b3d8-8009057f0afd", 00:27:07.418 "strip_size_kb": 0, 00:27:07.418 "state": "online", 00:27:07.418 "raid_level": "raid1", 00:27:07.418 "superblock": true, 00:27:07.418 "num_base_bdevs": 2, 00:27:07.418 "num_base_bdevs_discovered": 2, 00:27:07.418 "num_base_bdevs_operational": 2, 00:27:07.418 "base_bdevs_list": [ 00:27:07.418 { 00:27:07.418 "name": "pt1", 00:27:07.418 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:07.418 "is_configured": true, 00:27:07.418 "data_offset": 256, 00:27:07.418 "data_size": 7936 00:27:07.418 }, 00:27:07.418 { 00:27:07.418 "name": "pt2", 00:27:07.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:07.418 "is_configured": true, 00:27:07.418 "data_offset": 256, 00:27:07.418 "data_size": 7936 00:27:07.418 } 00:27:07.418 ] 00:27:07.418 } 00:27:07.418 } 00:27:07.418 }' 00:27:07.418 
12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:07.418 pt2' 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.418 
12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:27:07.418 [2024-12-05 12:58:49.927549] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' a818fde0-1751-4883-b3d8-8009057f0afd '!=' a818fde0-1751-4883-b3d8-8009057f0afd ']' 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:07.418 [2024-12-05 12:58:49.959310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:07.418 
12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:07.418 12:58:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:07.419 12:58:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.677 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:07.677 "name": "raid_bdev1", 00:27:07.677 "uuid": "a818fde0-1751-4883-b3d8-8009057f0afd", 
00:27:07.677 "strip_size_kb": 0, 00:27:07.677 "state": "online", 00:27:07.677 "raid_level": "raid1", 00:27:07.677 "superblock": true, 00:27:07.677 "num_base_bdevs": 2, 00:27:07.677 "num_base_bdevs_discovered": 1, 00:27:07.677 "num_base_bdevs_operational": 1, 00:27:07.677 "base_bdevs_list": [ 00:27:07.677 { 00:27:07.677 "name": null, 00:27:07.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:07.677 "is_configured": false, 00:27:07.677 "data_offset": 0, 00:27:07.677 "data_size": 7936 00:27:07.677 }, 00:27:07.677 { 00:27:07.677 "name": "pt2", 00:27:07.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:07.677 "is_configured": true, 00:27:07.677 "data_offset": 256, 00:27:07.677 "data_size": 7936 00:27:07.677 } 00:27:07.677 ] 00:27:07.677 }' 00:27:07.677 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:07.677 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:07.935 [2024-12-05 12:58:50.327379] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:07.935 [2024-12-05 12:58:50.327407] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:07.935 [2024-12-05 12:58:50.327477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:07.935 [2024-12-05 12:58:50.327547] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:07.935 [2024-12-05 12:58:50.327560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:27:07.935 12:58:50 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:27:07.935 12:58:50 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:07.935 [2024-12-05 12:58:50.379409] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:07.935 [2024-12-05 12:58:50.379485] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:07.935 [2024-12-05 12:58:50.379521] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:07.935 [2024-12-05 12:58:50.379537] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:07.935 [2024-12-05 12:58:50.381870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:07.935 [2024-12-05 12:58:50.382015] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:07.935 [2024-12-05 12:58:50.382103] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:07.935 [2024-12-05 12:58:50.382149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:07.935 [2024-12-05 12:58:50.382246] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:07.935 [2024-12-05 12:58:50.382260] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:07.935 [2024-12-05 12:58:50.382528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:07.935 [2024-12-05 12:58:50.382669] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:07.935 [2024-12-05 12:58:50.382678] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:27:07.935 [2024-12-05 12:58:50.382811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:07.935 pt2 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:07.935 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:07.936 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:07.936 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:07.936 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:07.936 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:07.936 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:07.936 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.936 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:07.936 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:07.936 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.936 12:58:50 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:07.936 "name": "raid_bdev1", 00:27:07.936 "uuid": "a818fde0-1751-4883-b3d8-8009057f0afd", 00:27:07.936 "strip_size_kb": 0, 00:27:07.936 "state": "online", 00:27:07.936 "raid_level": "raid1", 00:27:07.936 "superblock": true, 00:27:07.936 "num_base_bdevs": 2, 00:27:07.936 "num_base_bdevs_discovered": 1, 00:27:07.936 "num_base_bdevs_operational": 1, 00:27:07.936 "base_bdevs_list": [ 00:27:07.936 { 00:27:07.936 "name": null, 00:27:07.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:07.936 "is_configured": false, 00:27:07.936 "data_offset": 256, 00:27:07.936 "data_size": 7936 00:27:07.936 }, 00:27:07.936 { 00:27:07.936 "name": "pt2", 00:27:07.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:07.936 "is_configured": true, 00:27:07.936 "data_offset": 256, 00:27:07.936 "data_size": 7936 00:27:07.936 } 00:27:07.936 ] 00:27:07.936 }' 00:27:07.936 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:07.936 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:08.193 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:08.193 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.193 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:08.193 [2024-12-05 12:58:50.727444] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:08.193 [2024-12-05 12:58:50.727483] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:08.193 [2024-12-05 12:58:50.727580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:08.194 [2024-12-05 12:58:50.727639] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:08.194 [2024-12-05 12:58:50.727650] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:27:08.194 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.194 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:27:08.194 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:08.194 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.194 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:08.194 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.451 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:27:08.451 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:27:08.451 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:27:08.451 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:08.451 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.451 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:08.451 [2024-12-05 12:58:50.783486] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:08.451 [2024-12-05 12:58:50.783697] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:08.451 [2024-12-05 12:58:50.783728] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:27:08.451 [2024-12-05 12:58:50.783739] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:08.451 [2024-12-05 12:58:50.786126] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:08.451 [2024-12-05 12:58:50.786169] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:08.451 [2024-12-05 12:58:50.786261] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:08.451 [2024-12-05 12:58:50.786304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:08.451 [2024-12-05 12:58:50.786440] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:08.451 [2024-12-05 12:58:50.786451] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:08.451 [2024-12-05 12:58:50.786467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:27:08.451 [2024-12-05 12:58:50.786530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:08.451 [2024-12-05 12:58:50.786605] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:27:08.451 [2024-12-05 12:58:50.786618] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:08.451 [2024-12-05 12:58:50.786899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:27:08.452 [2024-12-05 12:58:50.787032] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:27:08.452 [2024-12-05 12:58:50.787043] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:27:08.452 [2024-12-05 12:58:50.787182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:08.452 pt1 00:27:08.452 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.452 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:27:08.452 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:08.452 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:08.452 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:08.452 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:08.452 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:08.452 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:08.452 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:08.452 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:08.452 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:08.452 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:08.452 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:08.452 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.452 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.452 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:08.452 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.452 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:08.452 "name": "raid_bdev1", 00:27:08.452 "uuid": "a818fde0-1751-4883-b3d8-8009057f0afd", 00:27:08.452 "strip_size_kb": 0, 00:27:08.452 "state": "online", 00:27:08.452 "raid_level": "raid1", 
00:27:08.452 "superblock": true, 00:27:08.452 "num_base_bdevs": 2, 00:27:08.452 "num_base_bdevs_discovered": 1, 00:27:08.452 "num_base_bdevs_operational": 1, 00:27:08.452 "base_bdevs_list": [ 00:27:08.452 { 00:27:08.452 "name": null, 00:27:08.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:08.452 "is_configured": false, 00:27:08.452 "data_offset": 256, 00:27:08.452 "data_size": 7936 00:27:08.452 }, 00:27:08.452 { 00:27:08.452 "name": "pt2", 00:27:08.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:08.452 "is_configured": true, 00:27:08.452 "data_offset": 256, 00:27:08.452 "data_size": 7936 00:27:08.452 } 00:27:08.452 ] 00:27:08.452 }' 00:27:08.452 12:58:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:08.452 12:58:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:27:08.710 
[2024-12-05 12:58:51.135788] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' a818fde0-1751-4883-b3d8-8009057f0afd '!=' a818fde0-1751-4883-b3d8-8009057f0afd ']' 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 83468 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 83468 ']' 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 83468 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83468 00:27:08.710 killing process with pid 83468 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83468' 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 83468 00:27:08.710 12:58:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 83468 00:27:08.710 [2024-12-05 12:58:51.188137] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:08.710 [2024-12-05 12:58:51.188221] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:08.710 [2024-12-05 12:58:51.188267] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:27:08.710 [2024-12-05 12:58:51.188280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:27:08.968 [2024-12-05 12:58:51.318535] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:09.533 ************************************ 00:27:09.533 END TEST raid_superblock_test_4k 00:27:09.533 ************************************ 00:27:09.533 12:58:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:27:09.533 00:27:09.533 real 0m4.559s 00:27:09.533 user 0m6.917s 00:27:09.533 sys 0m0.723s 00:27:09.533 12:58:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:09.533 12:58:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:27:09.533 12:58:52 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:27:09.533 12:58:52 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:27:09.533 12:58:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:27:09.533 12:58:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:09.533 12:58:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:09.533 ************************************ 00:27:09.533 START TEST raid_rebuild_test_sb_4k 00:27:09.533 ************************************ 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:27:09.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=83774 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 83774 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 83774 ']' 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:09.533 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:09.790 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:09.790 Zero copy mechanism will not be used. 00:27:09.790 [2024-12-05 12:58:52.150368] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:27:09.790 [2024-12-05 12:58:52.150511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83774 ] 00:27:09.790 [2024-12-05 12:58:52.308302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.048 [2024-12-05 12:58:52.408453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.048 [2024-12-05 12:58:52.544770] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:10.048 [2024-12-05 12:58:52.544822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:10.713 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.713 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:27:10.713 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:10.713 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:27:10.713 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.713 12:58:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:10.713 BaseBdev1_malloc 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:10.713 [2024-12-05 12:58:53.022578] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:10.713 [2024-12-05 12:58:53.022640] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:10.713 [2024-12-05 12:58:53.022663] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:10.713 [2024-12-05 12:58:53.022675] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:10.713 [2024-12-05 12:58:53.024790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:10.713 [2024-12-05 12:58:53.024827] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:10.713 BaseBdev1 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:10.713 BaseBdev2_malloc 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:10.713 [2024-12-05 12:58:53.058179] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:10.713 [2024-12-05 12:58:53.058235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:27:10.713 [2024-12-05 12:58:53.058254] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:10.713 [2024-12-05 12:58:53.058264] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:10.713 [2024-12-05 12:58:53.060360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:10.713 [2024-12-05 12:58:53.060537] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:10.713 BaseBdev2 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:10.713 spare_malloc 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:10.713 spare_delay 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:10.713 
[2024-12-05 12:58:53.115297] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:10.713 [2024-12-05 12:58:53.115358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:10.713 [2024-12-05 12:58:53.115377] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:10.713 [2024-12-05 12:58:53.115387] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:10.713 [2024-12-05 12:58:53.117563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:10.713 [2024-12-05 12:58:53.117733] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:10.713 spare 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:10.713 [2024-12-05 12:58:53.123347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:10.713 [2024-12-05 12:58:53.125212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:10.713 [2024-12-05 12:58:53.125384] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:10.713 [2024-12-05 12:58:53.125398] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:10.713 [2024-12-05 12:58:53.125663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:10.713 [2024-12-05 12:58:53.125816] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:10.713 [2024-12-05 
12:58:53.125825] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:10.713 [2024-12-05 12:58:53.125973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.713 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:10.713 "name": "raid_bdev1", 00:27:10.713 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:10.713 "strip_size_kb": 0, 00:27:10.713 "state": "online", 00:27:10.713 "raid_level": "raid1", 00:27:10.713 "superblock": true, 00:27:10.713 "num_base_bdevs": 2, 00:27:10.713 "num_base_bdevs_discovered": 2, 00:27:10.714 "num_base_bdevs_operational": 2, 00:27:10.714 "base_bdevs_list": [ 00:27:10.714 { 00:27:10.714 "name": "BaseBdev1", 00:27:10.714 "uuid": "0cfdda4d-45fb-59ec-bba3-f1cf85b09e2b", 00:27:10.714 "is_configured": true, 00:27:10.714 "data_offset": 256, 00:27:10.714 "data_size": 7936 00:27:10.714 }, 00:27:10.714 { 00:27:10.714 "name": "BaseBdev2", 00:27:10.714 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:10.714 "is_configured": true, 00:27:10.714 "data_offset": 256, 00:27:10.714 "data_size": 7936 00:27:10.714 } 00:27:10.714 ] 00:27:10.714 }' 00:27:10.714 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:10.714 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:10.972 [2024-12-05 12:58:53.459714] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:10.972 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:10.972 
12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:11.230 [2024-12-05 12:58:53.707531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:27:11.230 /dev/nbd0 00:27:11.230 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:11.230 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:11.230 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:11.230 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:27:11.230 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:11.230 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:11.230 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:11.230 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:27:11.230 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:11.230 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:11.230 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:11.230 1+0 records in 00:27:11.230 1+0 records out 00:27:11.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371508 s, 11.0 MB/s 00:27:11.230 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:11.230 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:27:11.230 12:58:53 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:11.230 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:11.230 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:27:11.230 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:11.230 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:11.230 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:27:11.230 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:27:11.230 12:58:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:27:12.163 7936+0 records in 00:27:12.163 7936+0 records out 00:27:12.163 32505856 bytes (33 MB, 31 MiB) copied, 0.680941 s, 47.7 MB/s 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:12.163 
12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:12.163 [2024-12-05 12:58:54.665174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:12.163 [2024-12-05 12:58:54.673254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:12.163 12:58:54 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:12.163 "name": "raid_bdev1", 00:27:12.163 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:12.163 "strip_size_kb": 0, 00:27:12.163 "state": "online", 00:27:12.163 "raid_level": "raid1", 00:27:12.163 "superblock": true, 00:27:12.163 "num_base_bdevs": 2, 00:27:12.163 "num_base_bdevs_discovered": 1, 00:27:12.163 "num_base_bdevs_operational": 1, 00:27:12.163 "base_bdevs_list": [ 00:27:12.163 { 00:27:12.163 "name": null, 00:27:12.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:12.163 "is_configured": false, 00:27:12.163 "data_offset": 0, 00:27:12.163 "data_size": 7936 00:27:12.163 }, 00:27:12.163 { 00:27:12.163 "name": "BaseBdev2", 00:27:12.163 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:12.163 "is_configured": true, 00:27:12.163 "data_offset": 256, 00:27:12.163 
"data_size": 7936 00:27:12.163 } 00:27:12.163 ] 00:27:12.163 }' 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:12.163 12:58:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:12.733 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:12.733 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.733 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:12.733 [2024-12-05 12:58:55.013333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:12.733 [2024-12-05 12:58:55.023052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:27:12.733 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.733 12:58:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:27:12.733 [2024-12-05 12:58:55.024804] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:13.668 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:13.668 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:13.668 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:13.668 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:13.668 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:13.668 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:13.668 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:27:13.668 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.668 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:13.668 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.668 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:13.668 "name": "raid_bdev1", 00:27:13.668 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:13.668 "strip_size_kb": 0, 00:27:13.668 "state": "online", 00:27:13.668 "raid_level": "raid1", 00:27:13.668 "superblock": true, 00:27:13.668 "num_base_bdevs": 2, 00:27:13.668 "num_base_bdevs_discovered": 2, 00:27:13.668 "num_base_bdevs_operational": 2, 00:27:13.668 "process": { 00:27:13.668 "type": "rebuild", 00:27:13.668 "target": "spare", 00:27:13.668 "progress": { 00:27:13.668 "blocks": 2560, 00:27:13.668 "percent": 32 00:27:13.668 } 00:27:13.668 }, 00:27:13.668 "base_bdevs_list": [ 00:27:13.668 { 00:27:13.668 "name": "spare", 00:27:13.668 "uuid": "91487cda-7c9c-5835-90da-c693ed43d55b", 00:27:13.668 "is_configured": true, 00:27:13.668 "data_offset": 256, 00:27:13.668 "data_size": 7936 00:27:13.668 }, 00:27:13.668 { 00:27:13.668 "name": "BaseBdev2", 00:27:13.668 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:13.668 "is_configured": true, 00:27:13.668 "data_offset": 256, 00:27:13.668 "data_size": 7936 00:27:13.668 } 00:27:13.668 ] 00:27:13.668 }' 00:27:13.668 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:13.668 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:13.668 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:13.668 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:27:13.668 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:13.668 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.668 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:13.668 [2024-12-05 12:58:56.130687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:13.668 [2024-12-05 12:58:56.230398] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:13.668 [2024-12-05 12:58:56.230674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:13.668 [2024-12-05 12:58:56.230691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:13.668 [2024-12-05 12:58:56.230699] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:13.927 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.927 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:13.927 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:13.927 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:13.927 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:13.927 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:13.927 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:13.927 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:13.927 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:27:13.927 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:13.927 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:13.927 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:13.927 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.927 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.927 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:13.927 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.927 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:13.927 "name": "raid_bdev1", 00:27:13.927 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:13.927 "strip_size_kb": 0, 00:27:13.927 "state": "online", 00:27:13.927 "raid_level": "raid1", 00:27:13.927 "superblock": true, 00:27:13.927 "num_base_bdevs": 2, 00:27:13.927 "num_base_bdevs_discovered": 1, 00:27:13.927 "num_base_bdevs_operational": 1, 00:27:13.927 "base_bdevs_list": [ 00:27:13.927 { 00:27:13.927 "name": null, 00:27:13.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:13.927 "is_configured": false, 00:27:13.927 "data_offset": 0, 00:27:13.927 "data_size": 7936 00:27:13.927 }, 00:27:13.927 { 00:27:13.927 "name": "BaseBdev2", 00:27:13.927 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:13.927 "is_configured": true, 00:27:13.927 "data_offset": 256, 00:27:13.927 "data_size": 7936 00:27:13.927 } 00:27:13.927 ] 00:27:13.927 }' 00:27:13.927 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:13.927 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:14.186 12:58:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:14.186 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:14.186 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:14.186 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:14.186 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:14.186 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:14.186 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:14.186 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.186 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:14.186 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.186 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:14.186 "name": "raid_bdev1", 00:27:14.186 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:14.186 "strip_size_kb": 0, 00:27:14.186 "state": "online", 00:27:14.186 "raid_level": "raid1", 00:27:14.186 "superblock": true, 00:27:14.186 "num_base_bdevs": 2, 00:27:14.186 "num_base_bdevs_discovered": 1, 00:27:14.186 "num_base_bdevs_operational": 1, 00:27:14.186 "base_bdevs_list": [ 00:27:14.186 { 00:27:14.186 "name": null, 00:27:14.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.186 "is_configured": false, 00:27:14.186 "data_offset": 0, 00:27:14.186 "data_size": 7936 00:27:14.186 }, 00:27:14.186 { 00:27:14.186 "name": "BaseBdev2", 00:27:14.186 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:14.186 "is_configured": true, 00:27:14.186 "data_offset": 
256, 00:27:14.186 "data_size": 7936 00:27:14.186 } 00:27:14.186 ] 00:27:14.186 }' 00:27:14.186 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:14.186 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:14.186 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:14.186 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:14.186 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:14.186 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.186 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:14.186 [2024-12-05 12:58:56.685592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:14.186 [2024-12-05 12:58:56.694575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:27:14.186 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.186 12:58:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:27:14.186 [2024-12-05 12:58:56.696172] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:15.120 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:15.121 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:15.121 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:15.121 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:15.121 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:15.121 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:15.121 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.121 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.121 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:15.379 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.379 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:15.379 "name": "raid_bdev1", 00:27:15.379 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:15.379 "strip_size_kb": 0, 00:27:15.379 "state": "online", 00:27:15.379 "raid_level": "raid1", 00:27:15.379 "superblock": true, 00:27:15.379 "num_base_bdevs": 2, 00:27:15.379 "num_base_bdevs_discovered": 2, 00:27:15.379 "num_base_bdevs_operational": 2, 00:27:15.379 "process": { 00:27:15.379 "type": "rebuild", 00:27:15.379 "target": "spare", 00:27:15.379 "progress": { 00:27:15.379 "blocks": 2560, 00:27:15.379 "percent": 32 00:27:15.379 } 00:27:15.379 }, 00:27:15.379 "base_bdevs_list": [ 00:27:15.379 { 00:27:15.379 "name": "spare", 00:27:15.379 "uuid": "91487cda-7c9c-5835-90da-c693ed43d55b", 00:27:15.379 "is_configured": true, 00:27:15.379 "data_offset": 256, 00:27:15.379 "data_size": 7936 00:27:15.379 }, 00:27:15.379 { 00:27:15.379 "name": "BaseBdev2", 00:27:15.379 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:15.379 "is_configured": true, 00:27:15.379 "data_offset": 256, 00:27:15.379 "data_size": 7936 00:27:15.379 } 00:27:15.379 ] 00:27:15.379 }' 00:27:15.379 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:27:15.380 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=530 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:15.380 "name": "raid_bdev1", 00:27:15.380 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:15.380 "strip_size_kb": 0, 00:27:15.380 "state": "online", 00:27:15.380 "raid_level": "raid1", 00:27:15.380 "superblock": true, 00:27:15.380 "num_base_bdevs": 2, 00:27:15.380 "num_base_bdevs_discovered": 2, 00:27:15.380 "num_base_bdevs_operational": 2, 00:27:15.380 "process": { 00:27:15.380 "type": "rebuild", 00:27:15.380 "target": "spare", 00:27:15.380 "progress": { 00:27:15.380 "blocks": 2816, 00:27:15.380 "percent": 35 00:27:15.380 } 00:27:15.380 }, 00:27:15.380 "base_bdevs_list": [ 00:27:15.380 { 00:27:15.380 "name": "spare", 00:27:15.380 "uuid": "91487cda-7c9c-5835-90da-c693ed43d55b", 00:27:15.380 "is_configured": true, 00:27:15.380 "data_offset": 256, 00:27:15.380 "data_size": 7936 00:27:15.380 }, 00:27:15.380 { 00:27:15.380 "name": "BaseBdev2", 00:27:15.380 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:15.380 "is_configured": true, 00:27:15.380 "data_offset": 256, 00:27:15.380 "data_size": 7936 00:27:15.380 } 00:27:15.380 ] 00:27:15.380 }' 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:15.380 12:58:57 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:27:16.317 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:16.317 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:16.317 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:16.317 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:16.317 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:16.317 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:16.317 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:16.317 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.317 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:16.317 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:16.575 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.575 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:16.575 "name": "raid_bdev1", 00:27:16.575 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:16.575 "strip_size_kb": 0, 00:27:16.575 "state": "online", 00:27:16.575 "raid_level": "raid1", 00:27:16.575 "superblock": true, 00:27:16.575 "num_base_bdevs": 2, 00:27:16.575 "num_base_bdevs_discovered": 2, 00:27:16.575 "num_base_bdevs_operational": 2, 00:27:16.575 "process": { 00:27:16.575 "type": "rebuild", 00:27:16.575 "target": "spare", 00:27:16.575 "progress": { 00:27:16.575 "blocks": 5376, 00:27:16.575 "percent": 67 00:27:16.575 } 00:27:16.575 }, 00:27:16.575 "base_bdevs_list": [ 00:27:16.575 { 
00:27:16.575 "name": "spare", 00:27:16.575 "uuid": "91487cda-7c9c-5835-90da-c693ed43d55b", 00:27:16.575 "is_configured": true, 00:27:16.575 "data_offset": 256, 00:27:16.575 "data_size": 7936 00:27:16.575 }, 00:27:16.575 { 00:27:16.575 "name": "BaseBdev2", 00:27:16.575 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:16.575 "is_configured": true, 00:27:16.575 "data_offset": 256, 00:27:16.575 "data_size": 7936 00:27:16.575 } 00:27:16.575 ] 00:27:16.575 }' 00:27:16.575 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:16.575 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:16.575 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:16.575 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:16.575 12:58:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:17.506 [2024-12-05 12:58:59.810395] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:17.506 [2024-12-05 12:58:59.810472] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:17.506 [2024-12-05 12:58:59.810593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:17.506 12:58:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:17.506 12:58:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:17.506 12:58:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:17.506 12:58:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:17.506 12:58:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:27:17.506 12:58:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:17.506 12:58:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:17.506 12:58:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:17.506 12:58:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.506 12:58:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:17.506 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.506 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:17.506 "name": "raid_bdev1", 00:27:17.506 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:17.506 "strip_size_kb": 0, 00:27:17.506 "state": "online", 00:27:17.506 "raid_level": "raid1", 00:27:17.506 "superblock": true, 00:27:17.506 "num_base_bdevs": 2, 00:27:17.506 "num_base_bdevs_discovered": 2, 00:27:17.506 "num_base_bdevs_operational": 2, 00:27:17.506 "base_bdevs_list": [ 00:27:17.506 { 00:27:17.506 "name": "spare", 00:27:17.506 "uuid": "91487cda-7c9c-5835-90da-c693ed43d55b", 00:27:17.506 "is_configured": true, 00:27:17.506 "data_offset": 256, 00:27:17.506 "data_size": 7936 00:27:17.506 }, 00:27:17.506 { 00:27:17.506 "name": "BaseBdev2", 00:27:17.506 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:17.506 "is_configured": true, 00:27:17.506 "data_offset": 256, 00:27:17.506 "data_size": 7936 00:27:17.506 } 00:27:17.506 ] 00:27:17.506 }' 00:27:17.506 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:17.506 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:17.506 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:17.764 "name": "raid_bdev1", 00:27:17.764 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:17.764 "strip_size_kb": 0, 00:27:17.764 "state": "online", 00:27:17.764 "raid_level": "raid1", 00:27:17.764 "superblock": true, 00:27:17.764 "num_base_bdevs": 2, 00:27:17.764 "num_base_bdevs_discovered": 2, 00:27:17.764 "num_base_bdevs_operational": 2, 00:27:17.764 "base_bdevs_list": [ 00:27:17.764 { 00:27:17.764 "name": "spare", 00:27:17.764 "uuid": "91487cda-7c9c-5835-90da-c693ed43d55b", 00:27:17.764 "is_configured": true, 00:27:17.764 
"data_offset": 256, 00:27:17.764 "data_size": 7936 00:27:17.764 }, 00:27:17.764 { 00:27:17.764 "name": "BaseBdev2", 00:27:17.764 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:17.764 "is_configured": true, 00:27:17.764 "data_offset": 256, 00:27:17.764 "data_size": 7936 00:27:17.764 } 00:27:17.764 ] 00:27:17.764 }' 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:17.764 "name": "raid_bdev1", 00:27:17.764 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:17.764 "strip_size_kb": 0, 00:27:17.764 "state": "online", 00:27:17.764 "raid_level": "raid1", 00:27:17.764 "superblock": true, 00:27:17.764 "num_base_bdevs": 2, 00:27:17.764 "num_base_bdevs_discovered": 2, 00:27:17.764 "num_base_bdevs_operational": 2, 00:27:17.764 "base_bdevs_list": [ 00:27:17.764 { 00:27:17.764 "name": "spare", 00:27:17.764 "uuid": "91487cda-7c9c-5835-90da-c693ed43d55b", 00:27:17.764 "is_configured": true, 00:27:17.764 "data_offset": 256, 00:27:17.764 "data_size": 7936 00:27:17.764 }, 00:27:17.764 { 00:27:17.764 "name": "BaseBdev2", 00:27:17.764 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:17.764 "is_configured": true, 00:27:17.764 "data_offset": 256, 00:27:17.764 "data_size": 7936 00:27:17.764 } 00:27:17.764 ] 00:27:17.764 }' 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:17.764 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.023 
[2024-12-05 12:59:00.505347] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:18.023 [2024-12-05 12:59:00.505499] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:18.023 [2024-12-05 12:59:00.505572] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:18.023 [2024-12-05 12:59:00.505632] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:18.023 [2024-12-05 12:59:00.505643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:18.023 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:18.293 /dev/nbd0 00:27:18.293 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:18.293 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:18.293 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:18.293 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:27:18.293 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:18.293 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:18.293 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:18.293 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:27:18.293 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:18.293 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:18.294 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:18.294 1+0 records in 00:27:18.294 1+0 records out 00:27:18.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039089 s, 10.5 MB/s 00:27:18.294 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:18.294 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:27:18.294 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:18.294 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:18.294 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:27:18.294 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:18.294 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:18.294 12:59:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:27:18.551 /dev/nbd1 00:27:18.551 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:18.551 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:18.551 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:27:18.551 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:27:18.551 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:18.551 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:18.551 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:27:18.551 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:27:18.551 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:18.551 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:18.551 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:18.551 1+0 records in 00:27:18.551 1+0 records out 00:27:18.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225144 s, 18.2 MB/s 00:27:18.551 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:18.551 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:27:18.551 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:18.551 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:18.551 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:27:18.551 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:18.551 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:18.551 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:18.808 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:27:18.808 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:18.808 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:18.808 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:18.808 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:27:18.808 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:18.808 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:19.095 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:19.096 12:59:01 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.096 [2024-12-05 12:59:01.644850] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:19.096 [2024-12-05 12:59:01.644905] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:19.096 [2024-12-05 12:59:01.644927] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:19.096 [2024-12-05 12:59:01.644936] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:19.096 [2024-12-05 12:59:01.646829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:19.096 
[2024-12-05 12:59:01.646862] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:19.096 [2024-12-05 12:59:01.646950] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:19.096 [2024-12-05 12:59:01.646994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:19.096 [2024-12-05 12:59:01.647104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:19.096 spare 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.096 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.452 [2024-12-05 12:59:01.747195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:27:19.452 [2024-12-05 12:59:01.747247] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:19.452 [2024-12-05 12:59:01.747538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:27:19.452 [2024-12-05 12:59:01.747705] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:27:19.452 [2024-12-05 12:59:01.747724] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:27:19.452 [2024-12-05 12:59:01.747875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:19.452 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.452 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:19.452 12:59:01 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:19.452 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:19.452 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:19.452 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:19.452 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:19.452 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:19.452 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:19.452 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:19.452 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:19.452 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:19.452 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:19.452 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.452 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.452 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.453 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:19.453 "name": "raid_bdev1", 00:27:19.453 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:19.453 "strip_size_kb": 0, 00:27:19.453 "state": "online", 00:27:19.453 "raid_level": "raid1", 00:27:19.453 "superblock": true, 00:27:19.453 "num_base_bdevs": 2, 00:27:19.453 "num_base_bdevs_discovered": 2, 00:27:19.453 "num_base_bdevs_operational": 2, 
00:27:19.453 "base_bdevs_list": [ 00:27:19.453 { 00:27:19.453 "name": "spare", 00:27:19.453 "uuid": "91487cda-7c9c-5835-90da-c693ed43d55b", 00:27:19.453 "is_configured": true, 00:27:19.453 "data_offset": 256, 00:27:19.453 "data_size": 7936 00:27:19.453 }, 00:27:19.453 { 00:27:19.453 "name": "BaseBdev2", 00:27:19.453 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:19.453 "is_configured": true, 00:27:19.453 "data_offset": 256, 00:27:19.453 "data_size": 7936 00:27:19.453 } 00:27:19.453 ] 00:27:19.453 }' 00:27:19.453 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:19.453 12:59:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:19.711 "name": "raid_bdev1", 00:27:19.711 
"uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:19.711 "strip_size_kb": 0, 00:27:19.711 "state": "online", 00:27:19.711 "raid_level": "raid1", 00:27:19.711 "superblock": true, 00:27:19.711 "num_base_bdevs": 2, 00:27:19.711 "num_base_bdevs_discovered": 2, 00:27:19.711 "num_base_bdevs_operational": 2, 00:27:19.711 "base_bdevs_list": [ 00:27:19.711 { 00:27:19.711 "name": "spare", 00:27:19.711 "uuid": "91487cda-7c9c-5835-90da-c693ed43d55b", 00:27:19.711 "is_configured": true, 00:27:19.711 "data_offset": 256, 00:27:19.711 "data_size": 7936 00:27:19.711 }, 00:27:19.711 { 00:27:19.711 "name": "BaseBdev2", 00:27:19.711 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:19.711 "is_configured": true, 00:27:19.711 "data_offset": 256, 00:27:19.711 "data_size": 7936 00:27:19.711 } 00:27:19.711 ] 00:27:19.711 }' 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.711 [2024-12-05 12:59:02.188983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.711 
12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:19.711 "name": "raid_bdev1", 00:27:19.711 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:19.711 "strip_size_kb": 0, 00:27:19.711 "state": "online", 00:27:19.711 "raid_level": "raid1", 00:27:19.711 "superblock": true, 00:27:19.711 "num_base_bdevs": 2, 00:27:19.711 "num_base_bdevs_discovered": 1, 00:27:19.711 "num_base_bdevs_operational": 1, 00:27:19.711 "base_bdevs_list": [ 00:27:19.711 { 00:27:19.711 "name": null, 00:27:19.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:19.711 "is_configured": false, 00:27:19.711 "data_offset": 0, 00:27:19.711 "data_size": 7936 00:27:19.711 }, 00:27:19.711 { 00:27:19.711 "name": "BaseBdev2", 00:27:19.711 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:19.711 "is_configured": true, 00:27:19.711 "data_offset": 256, 00:27:19.711 "data_size": 7936 00:27:19.711 } 00:27:19.711 ] 00:27:19.711 }' 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:19.711 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.968 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:19.968 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.968 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:19.968 [2024-12-05 12:59:02.505094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:19.968 [2024-12-05 12:59:02.505281] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:27:19.968 [2024-12-05 12:59:02.505296] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:19.968 [2024-12-05 12:59:02.505329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:19.968 [2024-12-05 12:59:02.514708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:27:19.968 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.968 12:59:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:27:19.968 [2024-12-05 12:59:02.516371] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:21.338 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:21.338 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:21.338 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:21.338 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:21.338 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:21.338 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:21.338 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:21.338 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.338 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:21.338 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.338 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:21.338 
"name": "raid_bdev1", 00:27:21.338 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:21.338 "strip_size_kb": 0, 00:27:21.338 "state": "online", 00:27:21.338 "raid_level": "raid1", 00:27:21.338 "superblock": true, 00:27:21.338 "num_base_bdevs": 2, 00:27:21.338 "num_base_bdevs_discovered": 2, 00:27:21.338 "num_base_bdevs_operational": 2, 00:27:21.338 "process": { 00:27:21.338 "type": "rebuild", 00:27:21.338 "target": "spare", 00:27:21.338 "progress": { 00:27:21.338 "blocks": 2560, 00:27:21.339 "percent": 32 00:27:21.339 } 00:27:21.339 }, 00:27:21.339 "base_bdevs_list": [ 00:27:21.339 { 00:27:21.339 "name": "spare", 00:27:21.339 "uuid": "91487cda-7c9c-5835-90da-c693ed43d55b", 00:27:21.339 "is_configured": true, 00:27:21.339 "data_offset": 256, 00:27:21.339 "data_size": 7936 00:27:21.339 }, 00:27:21.339 { 00:27:21.339 "name": "BaseBdev2", 00:27:21.339 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:21.339 "is_configured": true, 00:27:21.339 "data_offset": 256, 00:27:21.339 "data_size": 7936 00:27:21.339 } 00:27:21.339 ] 00:27:21.339 }' 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:21.339 [2024-12-05 12:59:03.622358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:21.339 [2024-12-05 
12:59:03.722053] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:21.339 [2024-12-05 12:59:03.722131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:21.339 [2024-12-05 12:59:03.722144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:21.339 [2024-12-05 12:59:03.722152] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:21.339 "name": "raid_bdev1", 00:27:21.339 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:21.339 "strip_size_kb": 0, 00:27:21.339 "state": "online", 00:27:21.339 "raid_level": "raid1", 00:27:21.339 "superblock": true, 00:27:21.339 "num_base_bdevs": 2, 00:27:21.339 "num_base_bdevs_discovered": 1, 00:27:21.339 "num_base_bdevs_operational": 1, 00:27:21.339 "base_bdevs_list": [ 00:27:21.339 { 00:27:21.339 "name": null, 00:27:21.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:21.339 "is_configured": false, 00:27:21.339 "data_offset": 0, 00:27:21.339 "data_size": 7936 00:27:21.339 }, 00:27:21.339 { 00:27:21.339 "name": "BaseBdev2", 00:27:21.339 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:21.339 "is_configured": true, 00:27:21.339 "data_offset": 256, 00:27:21.339 "data_size": 7936 00:27:21.339 } 00:27:21.339 ] 00:27:21.339 }' 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:21.339 12:59:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:21.597 12:59:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:21.597 12:59:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.597 12:59:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:21.597 [2024-12-05 12:59:04.064798] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:21.597 [2024-12-05 12:59:04.064859] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:21.597 [2024-12-05 12:59:04.064876] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:27:21.597 [2024-12-05 12:59:04.064885] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:21.597 [2024-12-05 12:59:04.065257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:21.597 [2024-12-05 12:59:04.065279] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:21.597 [2024-12-05 12:59:04.065330] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:21.597 [2024-12-05 12:59:04.065341] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:21.597 [2024-12-05 12:59:04.065349] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:27:21.597 [2024-12-05 12:59:04.065369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:21.597 [2024-12-05 12:59:04.073967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:27:21.597 spare 00:27:21.597 12:59:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.597 12:59:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:27:21.597 [2024-12-05 12:59:04.075536] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:22.529 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:22.530 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:22.530 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:22.530 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:22.530 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:22.530 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:22.530 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.530 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.530 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:22.530 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.530 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:22.530 "name": "raid_bdev1", 00:27:22.530 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:22.530 "strip_size_kb": 0, 00:27:22.530 
"state": "online", 00:27:22.530 "raid_level": "raid1", 00:27:22.530 "superblock": true, 00:27:22.530 "num_base_bdevs": 2, 00:27:22.530 "num_base_bdevs_discovered": 2, 00:27:22.530 "num_base_bdevs_operational": 2, 00:27:22.530 "process": { 00:27:22.530 "type": "rebuild", 00:27:22.530 "target": "spare", 00:27:22.530 "progress": { 00:27:22.530 "blocks": 2560, 00:27:22.530 "percent": 32 00:27:22.530 } 00:27:22.530 }, 00:27:22.530 "base_bdevs_list": [ 00:27:22.530 { 00:27:22.530 "name": "spare", 00:27:22.530 "uuid": "91487cda-7c9c-5835-90da-c693ed43d55b", 00:27:22.530 "is_configured": true, 00:27:22.530 "data_offset": 256, 00:27:22.530 "data_size": 7936 00:27:22.530 }, 00:27:22.530 { 00:27:22.530 "name": "BaseBdev2", 00:27:22.530 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:22.530 "is_configured": true, 00:27:22.530 "data_offset": 256, 00:27:22.530 "data_size": 7936 00:27:22.530 } 00:27:22.530 ] 00:27:22.530 }' 00:27:22.530 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:22.786 [2024-12-05 12:59:05.173852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:22.786 [2024-12-05 12:59:05.180782] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:27:22.786 [2024-12-05 12:59:05.180839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:22.786 [2024-12-05 12:59:05.180854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:22.786 [2024-12-05 12:59:05.180860] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.786 12:59:05 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:22.786 "name": "raid_bdev1", 00:27:22.786 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:22.786 "strip_size_kb": 0, 00:27:22.786 "state": "online", 00:27:22.786 "raid_level": "raid1", 00:27:22.786 "superblock": true, 00:27:22.786 "num_base_bdevs": 2, 00:27:22.786 "num_base_bdevs_discovered": 1, 00:27:22.786 "num_base_bdevs_operational": 1, 00:27:22.786 "base_bdevs_list": [ 00:27:22.786 { 00:27:22.786 "name": null, 00:27:22.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.786 "is_configured": false, 00:27:22.786 "data_offset": 0, 00:27:22.786 "data_size": 7936 00:27:22.786 }, 00:27:22.786 { 00:27:22.786 "name": "BaseBdev2", 00:27:22.786 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:22.786 "is_configured": true, 00:27:22.786 "data_offset": 256, 00:27:22.786 "data_size": 7936 00:27:22.786 } 00:27:22.786 ] 00:27:22.786 }' 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:22.786 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:23.043 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:23.043 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:23.043 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:23.043 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:23.043 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:23.043 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:23.043 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.043 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:23.044 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:23.044 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.044 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:23.044 "name": "raid_bdev1", 00:27:23.044 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:23.044 "strip_size_kb": 0, 00:27:23.044 "state": "online", 00:27:23.044 "raid_level": "raid1", 00:27:23.044 "superblock": true, 00:27:23.044 "num_base_bdevs": 2, 00:27:23.044 "num_base_bdevs_discovered": 1, 00:27:23.044 "num_base_bdevs_operational": 1, 00:27:23.044 "base_bdevs_list": [ 00:27:23.044 { 00:27:23.044 "name": null, 00:27:23.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.044 "is_configured": false, 00:27:23.044 "data_offset": 0, 00:27:23.044 "data_size": 7936 00:27:23.044 }, 00:27:23.044 { 00:27:23.044 "name": "BaseBdev2", 00:27:23.044 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:23.044 "is_configured": true, 00:27:23.044 "data_offset": 256, 00:27:23.044 "data_size": 7936 00:27:23.044 } 00:27:23.044 ] 00:27:23.044 }' 00:27:23.044 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:23.044 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:23.044 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:23.044 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:23.044 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:27:23.044 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.044 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:23.044 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.044 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:23.044 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.044 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:23.044 [2024-12-05 12:59:05.608292] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:23.044 [2024-12-05 12:59:05.608343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:23.044 [2024-12-05 12:59:05.608366] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:27:23.044 [2024-12-05 12:59:05.608374] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:23.044 [2024-12-05 12:59:05.608777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:23.044 [2024-12-05 12:59:05.608802] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:23.044 [2024-12-05 12:59:05.608866] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:23.044 [2024-12-05 12:59:05.608879] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:23.044 [2024-12-05 12:59:05.608889] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:23.044 [2024-12-05 12:59:05.608897] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:27:23.044 BaseBdev1 00:27:23.044 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.044 12:59:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:24.416 "name": "raid_bdev1", 00:27:24.416 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:24.416 "strip_size_kb": 0, 00:27:24.416 "state": "online", 00:27:24.416 "raid_level": "raid1", 00:27:24.416 "superblock": true, 00:27:24.416 "num_base_bdevs": 2, 00:27:24.416 "num_base_bdevs_discovered": 1, 00:27:24.416 "num_base_bdevs_operational": 1, 00:27:24.416 "base_bdevs_list": [ 00:27:24.416 { 00:27:24.416 "name": null, 00:27:24.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:24.416 "is_configured": false, 00:27:24.416 "data_offset": 0, 00:27:24.416 "data_size": 7936 00:27:24.416 }, 00:27:24.416 { 00:27:24.416 "name": "BaseBdev2", 00:27:24.416 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:24.416 "is_configured": true, 00:27:24.416 "data_offset": 256, 00:27:24.416 "data_size": 7936 00:27:24.416 } 00:27:24.416 ] 00:27:24.416 }' 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:24.416 12:59:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:24.674 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:24.674 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:27:24.674 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.674 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.674 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.674 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:24.674 "name": "raid_bdev1", 00:27:24.674 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:24.674 "strip_size_kb": 0, 00:27:24.674 "state": "online", 00:27:24.674 "raid_level": "raid1", 00:27:24.674 "superblock": true, 00:27:24.674 "num_base_bdevs": 2, 00:27:24.674 "num_base_bdevs_discovered": 1, 00:27:24.674 "num_base_bdevs_operational": 1, 00:27:24.674 "base_bdevs_list": [ 00:27:24.674 { 00:27:24.674 "name": null, 00:27:24.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:24.674 "is_configured": false, 00:27:24.674 "data_offset": 0, 00:27:24.674 "data_size": 7936 00:27:24.674 }, 00:27:24.674 { 00:27:24.674 "name": "BaseBdev2", 00:27:24.674 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:24.674 "is_configured": true, 00:27:24.674 "data_offset": 256, 00:27:24.674 "data_size": 7936 00:27:24.674 } 00:27:24.674 ] 00:27:24.674 }' 00:27:24.674 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:24.674 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:24.674 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:24.674 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:24.674 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:24.674 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:27:24.674 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:24.674 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:24.674 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:24.674 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:24.674 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:24.675 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:24.675 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.675 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:24.675 [2024-12-05 12:59:07.112660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:24.675 [2024-12-05 12:59:07.112798] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:24.675 [2024-12-05 12:59:07.112817] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:24.675 request: 00:27:24.675 { 00:27:24.675 "base_bdev": "BaseBdev1", 00:27:24.675 "raid_bdev": "raid_bdev1", 00:27:24.675 "method": "bdev_raid_add_base_bdev", 00:27:24.675 "req_id": 1 00:27:24.675 } 00:27:24.675 Got JSON-RPC error response 00:27:24.675 response: 00:27:24.675 { 00:27:24.675 "code": -22, 00:27:24.675 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:24.675 } 00:27:24.675 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:27:24.675 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:27:24.675 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:24.675 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:24.675 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:24.675 12:59:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:27:25.605 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:25.605 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:25.605 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:25.605 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:25.605 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:25.605 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:25.605 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:25.605 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:25.605 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:25.605 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:25.605 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.605 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.605 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.605 12:59:08 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.605 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.605 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:25.605 "name": "raid_bdev1", 00:27:25.605 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:25.605 "strip_size_kb": 0, 00:27:25.605 "state": "online", 00:27:25.605 "raid_level": "raid1", 00:27:25.605 "superblock": true, 00:27:25.605 "num_base_bdevs": 2, 00:27:25.605 "num_base_bdevs_discovered": 1, 00:27:25.605 "num_base_bdevs_operational": 1, 00:27:25.605 "base_bdevs_list": [ 00:27:25.605 { 00:27:25.605 "name": null, 00:27:25.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.605 "is_configured": false, 00:27:25.605 "data_offset": 0, 00:27:25.605 "data_size": 7936 00:27:25.605 }, 00:27:25.605 { 00:27:25.605 "name": "BaseBdev2", 00:27:25.605 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:25.605 "is_configured": true, 00:27:25.605 "data_offset": 256, 00:27:25.605 "data_size": 7936 00:27:25.605 } 00:27:25.605 ] 00:27:25.605 }' 00:27:25.606 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:25.606 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.863 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:25.863 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:25.863 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:25.863 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:25.863 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:25.863 12:59:08 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.863 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.863 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.863 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:25.863 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.120 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:26.120 "name": "raid_bdev1", 00:27:26.120 "uuid": "10970a82-e844-4133-91ba-2fc28797bb4c", 00:27:26.120 "strip_size_kb": 0, 00:27:26.120 "state": "online", 00:27:26.120 "raid_level": "raid1", 00:27:26.120 "superblock": true, 00:27:26.120 "num_base_bdevs": 2, 00:27:26.120 "num_base_bdevs_discovered": 1, 00:27:26.120 "num_base_bdevs_operational": 1, 00:27:26.120 "base_bdevs_list": [ 00:27:26.120 { 00:27:26.120 "name": null, 00:27:26.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.120 "is_configured": false, 00:27:26.120 "data_offset": 0, 00:27:26.120 "data_size": 7936 00:27:26.120 }, 00:27:26.120 { 00:27:26.120 "name": "BaseBdev2", 00:27:26.120 "uuid": "5ea7cecd-ccb2-5d27-9e4a-045d6dc9b4f0", 00:27:26.120 "is_configured": true, 00:27:26.120 "data_offset": 256, 00:27:26.120 "data_size": 7936 00:27:26.120 } 00:27:26.120 ] 00:27:26.120 }' 00:27:26.120 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:26.120 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:26.120 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:26.120 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:26.120 12:59:08 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 83774 00:27:26.121 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 83774 ']' 00:27:26.121 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 83774 00:27:26.121 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:27:26.121 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:26.121 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83774 00:27:26.121 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:26.121 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:26.121 killing process with pid 83774 00:27:26.121 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83774' 00:27:26.121 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 83774 00:27:26.121 Received shutdown signal, test time was about 60.000000 seconds 00:27:26.121 00:27:26.121 Latency(us) 00:27:26.121 [2024-12-05T12:59:08.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.121 [2024-12-05T12:59:08.708Z] =================================================================================================================== 00:27:26.121 [2024-12-05T12:59:08.708Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:26.121 [2024-12-05 12:59:08.534102] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:26.121 12:59:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 83774 00:27:26.121 [2024-12-05 12:59:08.534202] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:26.121 [2024-12-05 
12:59:08.534245] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:26.121 [2024-12-05 12:59:08.534255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:27:26.121 [2024-12-05 12:59:08.682348] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:26.685 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:27:26.685 00:27:26.685 real 0m17.168s 00:27:26.685 user 0m21.808s 00:27:26.685 sys 0m1.949s 00:27:26.685 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:26.685 12:59:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:27:26.685 ************************************ 00:27:26.685 END TEST raid_rebuild_test_sb_4k 00:27:26.685 ************************************ 00:27:26.943 12:59:09 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:27:26.943 12:59:09 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:27:26.943 12:59:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:26.943 12:59:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:26.943 12:59:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:26.943 ************************************ 00:27:26.943 START TEST raid_state_function_test_sb_md_separate 00:27:26.943 ************************************ 00:27:26.943 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:27:26.943 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:27:26.943 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:27:26.943 
12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:27:26.943 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:27:26.943 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:27:26.943 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:26.943 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:27:26.943 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:26.943 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:26.943 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:27:26.943 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:26.943 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:26.943 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:26.943 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:27:26.944 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:27:26.944 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:27:26.944 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:27:26.944 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:27:26.944 12:59:09 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:27:26.944 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:27:26.944 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:27:26.944 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:27:26.944 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=84442 00:27:26.944 Process raid pid: 84442 00:27:26.944 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84442' 00:27:26.944 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 84442 00:27:26.944 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 84442 ']' 00:27:26.944 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.944 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:26.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.944 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:26.944 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:26.944 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:26.944 12:59:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:26.944 [2024-12-05 12:59:09.348633] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:27:26.944 [2024-12-05 12:59:09.348734] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.944 [2024-12-05 12:59:09.498729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.200 [2024-12-05 12:59:09.585924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.200 [2024-12-05 12:59:09.698971] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:27.200 [2024-12-05 12:59:09.699017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:27.800 [2024-12-05 12:59:10.224314] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:27.800 [2024-12-05 12:59:10.224366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:27.800 [2024-12-05 12:59:10.224374] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:27.800 [2024-12-05 12:59:10.224382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:27.800 "name": "Existed_Raid", 00:27:27.800 "uuid": "520bd4d8-5ee1-483e-a449-91a02c704fc9", 00:27:27.800 "strip_size_kb": 0, 00:27:27.800 "state": "configuring", 00:27:27.800 "raid_level": "raid1", 00:27:27.800 "superblock": true, 00:27:27.800 "num_base_bdevs": 2, 00:27:27.800 "num_base_bdevs_discovered": 0, 00:27:27.800 "num_base_bdevs_operational": 2, 00:27:27.800 "base_bdevs_list": [ 00:27:27.800 { 00:27:27.800 "name": "BaseBdev1", 00:27:27.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:27.800 "is_configured": false, 00:27:27.800 "data_offset": 0, 00:27:27.800 "data_size": 0 00:27:27.800 }, 00:27:27.800 { 00:27:27.800 "name": "BaseBdev2", 00:27:27.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:27.800 "is_configured": false, 00:27:27.800 "data_offset": 0, 00:27:27.800 "data_size": 0 00:27:27.800 } 00:27:27.800 ] 00:27:27.800 }' 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:27.800 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:28.077 12:59:10 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:28.077 [2024-12-05 12:59:10.544327] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:28.077 [2024-12-05 12:59:10.544361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:28.077 [2024-12-05 12:59:10.552321] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:28.077 [2024-12-05 12:59:10.552354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:28.077 [2024-12-05 12:59:10.552361] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:28.077 [2024-12-05 12:59:10.552370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.077 12:59:10 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:28.077 [2024-12-05 12:59:10.581027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:28.077 BaseBdev1 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.077 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:27:28.077 [ 00:27:28.077 { 00:27:28.077 "name": "BaseBdev1", 00:27:28.077 "aliases": [ 00:27:28.077 "a3813d36-01ce-4c7a-a2c9-fa52dfc6ce69" 00:27:28.077 ], 00:27:28.077 "product_name": "Malloc disk", 00:27:28.077 "block_size": 4096, 00:27:28.077 "num_blocks": 8192, 00:27:28.077 "uuid": "a3813d36-01ce-4c7a-a2c9-fa52dfc6ce69", 00:27:28.077 "md_size": 32, 00:27:28.077 "md_interleave": false, 00:27:28.077 "dif_type": 0, 00:27:28.077 "assigned_rate_limits": { 00:27:28.077 "rw_ios_per_sec": 0, 00:27:28.077 "rw_mbytes_per_sec": 0, 00:27:28.077 "r_mbytes_per_sec": 0, 00:27:28.077 "w_mbytes_per_sec": 0 00:27:28.077 }, 00:27:28.077 "claimed": true, 00:27:28.077 "claim_type": "exclusive_write", 00:27:28.077 "zoned": false, 00:27:28.077 "supported_io_types": { 00:27:28.077 "read": true, 00:27:28.077 "write": true, 00:27:28.077 "unmap": true, 00:27:28.077 "flush": true, 00:27:28.077 "reset": true, 00:27:28.077 "nvme_admin": false, 00:27:28.077 "nvme_io": false, 00:27:28.077 "nvme_io_md": false, 00:27:28.077 "write_zeroes": true, 00:27:28.077 "zcopy": true, 00:27:28.077 "get_zone_info": false, 00:27:28.077 "zone_management": false, 00:27:28.077 "zone_append": false, 00:27:28.077 "compare": false, 00:27:28.077 "compare_and_write": false, 00:27:28.077 "abort": true, 00:27:28.077 "seek_hole": false, 00:27:28.077 "seek_data": false, 00:27:28.077 "copy": true, 00:27:28.077 "nvme_iov_md": false 00:27:28.077 }, 00:27:28.077 "memory_domains": [ 00:27:28.077 { 00:27:28.077 "dma_device_id": "system", 00:27:28.077 "dma_device_type": 1 00:27:28.077 }, 00:27:28.077 { 00:27:28.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:28.078 "dma_device_type": 2 00:27:28.078 } 00:27:28.078 ], 00:27:28.078 "driver_specific": {} 00:27:28.078 } 00:27:28.078 ] 00:27:28.078 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.078 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@911 -- # return 0 00:27:28.078 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:28.078 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:28.078 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:28.078 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:28.078 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:28.078 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:28.078 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:28.078 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:28.078 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:28.078 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:28.078 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:28.078 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:28.078 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.078 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:28.078 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:27:28.078 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:28.078 "name": "Existed_Raid", 00:27:28.078 "uuid": "dbda4476-cb85-4e70-a4a9-5728800386f3", 00:27:28.078 "strip_size_kb": 0, 00:27:28.078 "state": "configuring", 00:27:28.078 "raid_level": "raid1", 00:27:28.078 "superblock": true, 00:27:28.078 "num_base_bdevs": 2, 00:27:28.078 "num_base_bdevs_discovered": 1, 00:27:28.078 "num_base_bdevs_operational": 2, 00:27:28.078 "base_bdevs_list": [ 00:27:28.078 { 00:27:28.078 "name": "BaseBdev1", 00:27:28.078 "uuid": "a3813d36-01ce-4c7a-a2c9-fa52dfc6ce69", 00:27:28.078 "is_configured": true, 00:27:28.078 "data_offset": 256, 00:27:28.078 "data_size": 7936 00:27:28.078 }, 00:27:28.078 { 00:27:28.078 "name": "BaseBdev2", 00:27:28.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:28.078 "is_configured": false, 00:27:28.078 "data_offset": 0, 00:27:28.078 "data_size": 0 00:27:28.078 } 00:27:28.078 ] 00:27:28.078 }' 00:27:28.078 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:28.078 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:28.335 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:28.335 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.335 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:28.335 [2024-12-05 12:59:10.905145] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:28.335 [2024-12-05 12:59:10.905191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:27:28.335 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:27:28.336 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:28.336 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.336 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:28.336 [2024-12-05 12:59:10.913171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:28.336 [2024-12-05 12:59:10.914687] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:28.336 [2024-12-05 12:59:10.914722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:28.336 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.336 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:28.336 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:28.336 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:28.336 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:28.336 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:28.336 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:28.336 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:28.336 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:27:28.336 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:28.336 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:28.336 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:28.336 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:28.593 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:28.593 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.593 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:28.593 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:28.593 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.593 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:28.593 "name": "Existed_Raid", 00:27:28.593 "uuid": "2efebe11-662c-44f0-9931-f5e7922cd7d6", 00:27:28.593 "strip_size_kb": 0, 00:27:28.593 "state": "configuring", 00:27:28.593 "raid_level": "raid1", 00:27:28.593 "superblock": true, 00:27:28.593 "num_base_bdevs": 2, 00:27:28.593 "num_base_bdevs_discovered": 1, 00:27:28.593 "num_base_bdevs_operational": 2, 00:27:28.593 "base_bdevs_list": [ 00:27:28.593 { 00:27:28.593 "name": "BaseBdev1", 00:27:28.593 "uuid": "a3813d36-01ce-4c7a-a2c9-fa52dfc6ce69", 00:27:28.593 "is_configured": true, 00:27:28.593 "data_offset": 256, 00:27:28.593 "data_size": 7936 00:27:28.593 }, 00:27:28.593 { 00:27:28.593 "name": "BaseBdev2", 00:27:28.593 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:27:28.593 "is_configured": false, 00:27:28.594 "data_offset": 0, 00:27:28.594 "data_size": 0 00:27:28.594 } 00:27:28.594 ] 00:27:28.594 }' 00:27:28.594 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:28.594 12:59:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:28.852 [2024-12-05 12:59:11.240254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:28.852 [2024-12-05 12:59:11.240432] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:28.852 [2024-12-05 12:59:11.240448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:28.852 [2024-12-05 12:59:11.240533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:28.852 [2024-12-05 12:59:11.240629] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:28.852 [2024-12-05 12:59:11.240652] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:27:28.852 [2024-12-05 12:59:11.240722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:28.852 BaseBdev2 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:28.852 
12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:28.852 [ 00:27:28.852 { 00:27:28.852 "name": "BaseBdev2", 00:27:28.852 "aliases": [ 00:27:28.852 "78a4596b-a8c2-4aa2-8fe0-cd97b2635a95" 00:27:28.852 ], 00:27:28.852 "product_name": "Malloc disk", 00:27:28.852 "block_size": 4096, 00:27:28.852 "num_blocks": 8192, 00:27:28.852 "uuid": "78a4596b-a8c2-4aa2-8fe0-cd97b2635a95", 00:27:28.852 "md_size": 32, 00:27:28.852 "md_interleave": false, 00:27:28.852 "dif_type": 0, 00:27:28.852 "assigned_rate_limits": { 00:27:28.852 
"rw_ios_per_sec": 0, 00:27:28.852 "rw_mbytes_per_sec": 0, 00:27:28.852 "r_mbytes_per_sec": 0, 00:27:28.852 "w_mbytes_per_sec": 0 00:27:28.852 }, 00:27:28.852 "claimed": true, 00:27:28.852 "claim_type": "exclusive_write", 00:27:28.852 "zoned": false, 00:27:28.852 "supported_io_types": { 00:27:28.852 "read": true, 00:27:28.852 "write": true, 00:27:28.852 "unmap": true, 00:27:28.852 "flush": true, 00:27:28.852 "reset": true, 00:27:28.852 "nvme_admin": false, 00:27:28.852 "nvme_io": false, 00:27:28.852 "nvme_io_md": false, 00:27:28.852 "write_zeroes": true, 00:27:28.852 "zcopy": true, 00:27:28.852 "get_zone_info": false, 00:27:28.852 "zone_management": false, 00:27:28.852 "zone_append": false, 00:27:28.852 "compare": false, 00:27:28.852 "compare_and_write": false, 00:27:28.852 "abort": true, 00:27:28.852 "seek_hole": false, 00:27:28.852 "seek_data": false, 00:27:28.852 "copy": true, 00:27:28.852 "nvme_iov_md": false 00:27:28.852 }, 00:27:28.852 "memory_domains": [ 00:27:28.852 { 00:27:28.852 "dma_device_id": "system", 00:27:28.852 "dma_device_type": 1 00:27:28.852 }, 00:27:28.852 { 00:27:28.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:28.852 "dma_device_type": 2 00:27:28.852 } 00:27:28.852 ], 00:27:28.852 "driver_specific": {} 00:27:28.852 } 00:27:28.852 ] 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.852 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:28.852 "name": "Existed_Raid", 00:27:28.852 "uuid": "2efebe11-662c-44f0-9931-f5e7922cd7d6", 00:27:28.852 "strip_size_kb": 0, 00:27:28.852 "state": "online", 
00:27:28.852 "raid_level": "raid1", 00:27:28.852 "superblock": true, 00:27:28.852 "num_base_bdevs": 2, 00:27:28.852 "num_base_bdevs_discovered": 2, 00:27:28.852 "num_base_bdevs_operational": 2, 00:27:28.853 "base_bdevs_list": [ 00:27:28.853 { 00:27:28.853 "name": "BaseBdev1", 00:27:28.853 "uuid": "a3813d36-01ce-4c7a-a2c9-fa52dfc6ce69", 00:27:28.853 "is_configured": true, 00:27:28.853 "data_offset": 256, 00:27:28.853 "data_size": 7936 00:27:28.853 }, 00:27:28.853 { 00:27:28.853 "name": "BaseBdev2", 00:27:28.853 "uuid": "78a4596b-a8c2-4aa2-8fe0-cd97b2635a95", 00:27:28.853 "is_configured": true, 00:27:28.853 "data_offset": 256, 00:27:28.853 "data_size": 7936 00:27:28.853 } 00:27:28.853 ] 00:27:28.853 }' 00:27:28.853 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:28.853 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:29.110 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:29.110 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:29.110 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:29.110 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:29.110 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:27:29.110 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:29.110 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:29.110 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:29.110 12:59:11 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.110 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:29.110 [2024-12-05 12:59:11.584654] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:29.110 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.110 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:29.110 "name": "Existed_Raid", 00:27:29.110 "aliases": [ 00:27:29.110 "2efebe11-662c-44f0-9931-f5e7922cd7d6" 00:27:29.110 ], 00:27:29.110 "product_name": "Raid Volume", 00:27:29.110 "block_size": 4096, 00:27:29.110 "num_blocks": 7936, 00:27:29.110 "uuid": "2efebe11-662c-44f0-9931-f5e7922cd7d6", 00:27:29.110 "md_size": 32, 00:27:29.110 "md_interleave": false, 00:27:29.110 "dif_type": 0, 00:27:29.110 "assigned_rate_limits": { 00:27:29.110 "rw_ios_per_sec": 0, 00:27:29.110 "rw_mbytes_per_sec": 0, 00:27:29.110 "r_mbytes_per_sec": 0, 00:27:29.110 "w_mbytes_per_sec": 0 00:27:29.110 }, 00:27:29.110 "claimed": false, 00:27:29.110 "zoned": false, 00:27:29.110 "supported_io_types": { 00:27:29.110 "read": true, 00:27:29.110 "write": true, 00:27:29.110 "unmap": false, 00:27:29.110 "flush": false, 00:27:29.110 "reset": true, 00:27:29.110 "nvme_admin": false, 00:27:29.110 "nvme_io": false, 00:27:29.110 "nvme_io_md": false, 00:27:29.110 "write_zeroes": true, 00:27:29.110 "zcopy": false, 00:27:29.110 "get_zone_info": false, 00:27:29.110 "zone_management": false, 00:27:29.110 "zone_append": false, 00:27:29.110 "compare": false, 00:27:29.110 "compare_and_write": false, 00:27:29.110 "abort": false, 00:27:29.110 "seek_hole": false, 00:27:29.110 "seek_data": false, 00:27:29.110 "copy": false, 00:27:29.110 "nvme_iov_md": false 00:27:29.110 }, 00:27:29.110 "memory_domains": [ 00:27:29.110 { 00:27:29.110 
"dma_device_id": "system", 00:27:29.110 "dma_device_type": 1 00:27:29.110 }, 00:27:29.110 { 00:27:29.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:29.110 "dma_device_type": 2 00:27:29.110 }, 00:27:29.110 { 00:27:29.110 "dma_device_id": "system", 00:27:29.110 "dma_device_type": 1 00:27:29.110 }, 00:27:29.110 { 00:27:29.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:29.110 "dma_device_type": 2 00:27:29.110 } 00:27:29.110 ], 00:27:29.110 "driver_specific": { 00:27:29.110 "raid": { 00:27:29.110 "uuid": "2efebe11-662c-44f0-9931-f5e7922cd7d6", 00:27:29.110 "strip_size_kb": 0, 00:27:29.110 "state": "online", 00:27:29.110 "raid_level": "raid1", 00:27:29.110 "superblock": true, 00:27:29.110 "num_base_bdevs": 2, 00:27:29.110 "num_base_bdevs_discovered": 2, 00:27:29.110 "num_base_bdevs_operational": 2, 00:27:29.110 "base_bdevs_list": [ 00:27:29.110 { 00:27:29.110 "name": "BaseBdev1", 00:27:29.110 "uuid": "a3813d36-01ce-4c7a-a2c9-fa52dfc6ce69", 00:27:29.110 "is_configured": true, 00:27:29.110 "data_offset": 256, 00:27:29.110 "data_size": 7936 00:27:29.110 }, 00:27:29.110 { 00:27:29.110 "name": "BaseBdev2", 00:27:29.111 "uuid": "78a4596b-a8c2-4aa2-8fe0-cd97b2635a95", 00:27:29.111 "is_configured": true, 00:27:29.111 "data_offset": 256, 00:27:29.111 "data_size": 7936 00:27:29.111 } 00:27:29.111 ] 00:27:29.111 } 00:27:29.111 } 00:27:29.111 }' 00:27:29.111 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:29.111 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:29.111 BaseBdev2' 00:27:29.111 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:29.111 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 
false 0' 00:27:29.111 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:29.111 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:27:29.111 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.111 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:29.111 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:29.111 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:29.369 [2024-12-05 12:59:11.748437] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:29.369 12:59:11 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:29.369 "name": "Existed_Raid", 00:27:29.369 "uuid": "2efebe11-662c-44f0-9931-f5e7922cd7d6", 00:27:29.369 "strip_size_kb": 0, 00:27:29.369 "state": "online", 00:27:29.369 "raid_level": "raid1", 00:27:29.369 "superblock": true, 00:27:29.369 "num_base_bdevs": 2, 00:27:29.369 "num_base_bdevs_discovered": 1, 00:27:29.369 "num_base_bdevs_operational": 1, 00:27:29.369 "base_bdevs_list": [ 00:27:29.369 { 00:27:29.369 "name": null, 00:27:29.369 "uuid": "00000000-0000-0000-0000-000000000000", 
00:27:29.369 "is_configured": false, 00:27:29.369 "data_offset": 0, 00:27:29.369 "data_size": 7936 00:27:29.369 }, 00:27:29.369 { 00:27:29.369 "name": "BaseBdev2", 00:27:29.369 "uuid": "78a4596b-a8c2-4aa2-8fe0-cd97b2635a95", 00:27:29.369 "is_configured": true, 00:27:29.369 "data_offset": 256, 00:27:29.369 "data_size": 7936 00:27:29.369 } 00:27:29.369 ] 00:27:29.369 }' 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:29.369 12:59:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:29.627 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:27:29.627 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:29.627 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.627 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:29.627 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.627 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:29.627 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.627 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:29.627 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:29.627 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:27:29.627 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:29.627 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:29.627 [2024-12-05 12:59:12.135049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:29.627 [2024-12-05 12:59:12.135131] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:29.627 [2024-12-05 12:59:12.186007] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:29.627 [2024-12-05 12:59:12.186182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:29.627 [2024-12-05 12:59:12.186200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:29.627 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.627 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:29.627 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:29.627 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.627 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.627 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:29.627 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:27:29.627 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.884 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:27:29.884 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' 
-n '' ']' 00:27:29.884 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:27:29.884 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 84442 00:27:29.884 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 84442 ']' 00:27:29.884 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 84442 00:27:29.884 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:27:29.884 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:29.884 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84442 00:27:29.884 killing process with pid 84442 00:27:29.884 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:29.884 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:29.884 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84442' 00:27:29.884 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 84442 00:27:29.884 [2024-12-05 12:59:12.244203] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:29.884 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 84442 00:27:29.884 [2024-12-05 12:59:12.252710] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:30.449 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:27:30.449 00:27:30.449 real 0m3.545s 00:27:30.449 user 0m5.163s 00:27:30.449 sys 
0m0.587s 00:27:30.449 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:30.449 ************************************ 00:27:30.449 END TEST raid_state_function_test_sb_md_separate 00:27:30.449 ************************************ 00:27:30.449 12:59:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:30.449 12:59:12 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:27:30.449 12:59:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:30.449 12:59:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:30.449 12:59:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:30.449 ************************************ 00:27:30.449 START TEST raid_superblock_test_md_separate 00:27:30.449 ************************************ 00:27:30.449 12:59:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:27:30.449 12:59:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:27:30.449 12:59:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:27:30.449 12:59:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:27:30.449 12:59:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:27:30.449 12:59:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:27:30.449 12:59:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:27:30.449 12:59:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:27:30.449 12:59:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # 
local base_bdevs_pt_uuid 00:27:30.449 12:59:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:27:30.449 12:59:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:27:30.449 12:59:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:27:30.449 12:59:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:27:30.449 12:59:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:27:30.449 12:59:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:27:30.449 12:59:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:27:30.449 12:59:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=84678 00:27:30.450 12:59:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 84678 00:27:30.450 12:59:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 84678 ']' 00:27:30.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.450 12:59:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.450 12:59:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:27:30.450 12:59:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:30.450 12:59:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:30.450 12:59:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:30.450 12:59:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:30.450 [2024-12-05 12:59:12.944550] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:27:30.450 [2024-12-05 12:59:12.944862] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84678 ] 00:27:30.707 [2024-12-05 12:59:13.102800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.707 [2024-12-05 12:59:13.188043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.964 [2024-12-05 12:59:13.297821] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:30.964 [2024-12-05 12:59:13.297858] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:31.221 12:59:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:31.221 12:59:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:27:31.221 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:27:31.221 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:31.221 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:27:31.221 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:27:31.221 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:31.221 12:59:13 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:31.221 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:31.221 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:31.221 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:27:31.221 12:59:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.221 12:59:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:31.479 malloc1 00:27:31.479 12:59:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.479 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:31.479 12:59:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.479 12:59:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:31.479 [2024-12-05 12:59:13.825143] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:31.479 [2024-12-05 12:59:13.825188] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:31.479 [2024-12-05 12:59:13.825206] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:31.479 [2024-12-05 12:59:13.825215] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:31.479 [2024-12-05 12:59:13.826827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:31.479 [2024-12-05 12:59:13.826855] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:27:31.479 pt1 00:27:31.479 12:59:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.479 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:31.479 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:31.479 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:27:31.479 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:27:31.479 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:31.479 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:31.479 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:31.479 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:31.479 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:27:31.479 12:59:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.479 12:59:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:31.479 malloc2 00:27:31.479 12:59:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.479 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:31.479 12:59:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.479 12:59:13 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:31.479 [2024-12-05 12:59:13.857572] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:31.479 [2024-12-05 12:59:13.857615] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:31.479 [2024-12-05 12:59:13.857629] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:31.479 [2024-12-05 12:59:13.857636] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:31.479 [2024-12-05 12:59:13.859192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:31.479 [2024-12-05 12:59:13.859220] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:31.479 pt2 00:27:31.479 12:59:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.479 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:31.480 [2024-12-05 12:59:13.865590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:31.480 [2024-12-05 12:59:13.867081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:31.480 [2024-12-05 12:59:13.867222] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:31.480 [2024-12-05 12:59:13.867232] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:31.480 [2024-12-05 12:59:13.867291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:31.480 [2024-12-05 12:59:13.867380] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:31.480 [2024-12-05 12:59:13.867390] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:31.480 [2024-12-05 12:59:13.867466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:31.480 12:59:13 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:31.480 "name": "raid_bdev1", 00:27:31.480 "uuid": "eebeaf54-1e04-4fda-bcff-1117bca4b3de", 00:27:31.480 "strip_size_kb": 0, 00:27:31.480 "state": "online", 00:27:31.480 "raid_level": "raid1", 00:27:31.480 "superblock": true, 00:27:31.480 "num_base_bdevs": 2, 00:27:31.480 "num_base_bdevs_discovered": 2, 00:27:31.480 "num_base_bdevs_operational": 2, 00:27:31.480 "base_bdevs_list": [ 00:27:31.480 { 00:27:31.480 "name": "pt1", 00:27:31.480 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:31.480 "is_configured": true, 00:27:31.480 "data_offset": 256, 00:27:31.480 "data_size": 7936 00:27:31.480 }, 00:27:31.480 { 00:27:31.480 "name": "pt2", 00:27:31.480 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:31.480 "is_configured": true, 00:27:31.480 "data_offset": 256, 00:27:31.480 "data_size": 7936 00:27:31.480 } 00:27:31.480 ] 00:27:31.480 }' 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:31.480 12:59:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:31.738 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:27:31.738 12:59:14 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:31.738 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:31.738 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:31.738 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:27:31.738 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:31.738 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:31.738 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:31.738 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.738 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:31.738 [2024-12-05 12:59:14.197903] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:31.738 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.738 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:31.738 "name": "raid_bdev1", 00:27:31.738 "aliases": [ 00:27:31.738 "eebeaf54-1e04-4fda-bcff-1117bca4b3de" 00:27:31.738 ], 00:27:31.738 "product_name": "Raid Volume", 00:27:31.738 "block_size": 4096, 00:27:31.738 "num_blocks": 7936, 00:27:31.738 "uuid": "eebeaf54-1e04-4fda-bcff-1117bca4b3de", 00:27:31.738 "md_size": 32, 00:27:31.738 "md_interleave": false, 00:27:31.738 "dif_type": 0, 00:27:31.738 "assigned_rate_limits": { 00:27:31.738 "rw_ios_per_sec": 0, 00:27:31.738 "rw_mbytes_per_sec": 0, 00:27:31.738 "r_mbytes_per_sec": 0, 00:27:31.738 "w_mbytes_per_sec": 0 00:27:31.738 }, 00:27:31.738 "claimed": false, 00:27:31.738 "zoned": false, 
00:27:31.738 "supported_io_types": { 00:27:31.738 "read": true, 00:27:31.738 "write": true, 00:27:31.738 "unmap": false, 00:27:31.738 "flush": false, 00:27:31.738 "reset": true, 00:27:31.738 "nvme_admin": false, 00:27:31.738 "nvme_io": false, 00:27:31.738 "nvme_io_md": false, 00:27:31.738 "write_zeroes": true, 00:27:31.738 "zcopy": false, 00:27:31.738 "get_zone_info": false, 00:27:31.738 "zone_management": false, 00:27:31.738 "zone_append": false, 00:27:31.738 "compare": false, 00:27:31.738 "compare_and_write": false, 00:27:31.738 "abort": false, 00:27:31.738 "seek_hole": false, 00:27:31.738 "seek_data": false, 00:27:31.738 "copy": false, 00:27:31.738 "nvme_iov_md": false 00:27:31.738 }, 00:27:31.739 "memory_domains": [ 00:27:31.739 { 00:27:31.739 "dma_device_id": "system", 00:27:31.739 "dma_device_type": 1 00:27:31.739 }, 00:27:31.739 { 00:27:31.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:31.739 "dma_device_type": 2 00:27:31.739 }, 00:27:31.739 { 00:27:31.739 "dma_device_id": "system", 00:27:31.739 "dma_device_type": 1 00:27:31.739 }, 00:27:31.739 { 00:27:31.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:31.739 "dma_device_type": 2 00:27:31.739 } 00:27:31.739 ], 00:27:31.739 "driver_specific": { 00:27:31.739 "raid": { 00:27:31.739 "uuid": "eebeaf54-1e04-4fda-bcff-1117bca4b3de", 00:27:31.739 "strip_size_kb": 0, 00:27:31.739 "state": "online", 00:27:31.739 "raid_level": "raid1", 00:27:31.739 "superblock": true, 00:27:31.739 "num_base_bdevs": 2, 00:27:31.739 "num_base_bdevs_discovered": 2, 00:27:31.739 "num_base_bdevs_operational": 2, 00:27:31.739 "base_bdevs_list": [ 00:27:31.739 { 00:27:31.739 "name": "pt1", 00:27:31.739 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:31.739 "is_configured": true, 00:27:31.739 "data_offset": 256, 00:27:31.739 "data_size": 7936 00:27:31.739 }, 00:27:31.739 { 00:27:31.739 "name": "pt2", 00:27:31.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:31.739 "is_configured": true, 00:27:31.739 "data_offset": 256, 
00:27:31.739 "data_size": 7936 00:27:31.739 } 00:27:31.739 ] 00:27:31.739 } 00:27:31.739 } 00:27:31.739 }' 00:27:31.739 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:31.739 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:31.739 pt2' 00:27:31.739 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:31.739 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:27:31.739 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:31.739 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:31.739 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:31.739 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.739 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:31.739 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.739 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:27:31.739 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:27:31.739 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:31.739 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:31.739 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:31.739 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.739 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:31.998 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.998 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:27:31.998 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:27:31.998 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:31.998 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.998 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:27:31.998 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:31.998 [2024-12-05 12:59:14.353919] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:31.998 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.998 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=eebeaf54-1e04-4fda-bcff-1117bca4b3de 00:27:31.998 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z eebeaf54-1e04-4fda-bcff-1117bca4b3de ']' 00:27:31.998 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:31.998 12:59:14 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.998 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:31.998 [2024-12-05 12:59:14.385675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:31.998 [2024-12-05 12:59:14.385698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:31.998 [2024-12-05 12:59:14.385768] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:31.998 [2024-12-05 12:59:14.385822] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:31.998 [2024-12-05 12:59:14.385832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:31.998 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.998 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:31.998 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@652 -- # local es=0 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:31.999 [2024-12-05 12:59:14.485721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:31.999 [2024-12-05 12:59:14.487407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:31.999 [2024-12-05 12:59:14.487472] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:31.999 [2024-12-05 12:59:14.487535] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:31.999 [2024-12-05 12:59:14.487549] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:31.999 [2024-12-05 12:59:14.487558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:27:31.999 request: 00:27:31.999 { 00:27:31.999 "name": "raid_bdev1", 00:27:31.999 "raid_level": "raid1", 00:27:31.999 "base_bdevs": [ 00:27:31.999 "malloc1", 00:27:31.999 "malloc2" 00:27:31.999 ], 00:27:31.999 "superblock": false, 00:27:31.999 "method": "bdev_raid_create", 00:27:31.999 "req_id": 1 00:27:31.999 } 00:27:31.999 Got JSON-RPC error response 00:27:31.999 response: 00:27:31.999 { 00:27:31.999 "code": -17, 00:27:31.999 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:31.999 } 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:31.999 [2024-12-05 12:59:14.529721] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:31.999 [2024-12-05 12:59:14.529782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:31.999 [2024-12-05 12:59:14.529798] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:31.999 [2024-12-05 12:59:14.529808] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:31.999 [2024-12-05 12:59:14.531539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:31.999 [2024-12-05 12:59:14.531573] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:31.999 [2024-12-05 12:59:14.531618] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:31.999 [2024-12-05 12:59:14.531663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:31.999 pt1 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:31.999 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:32.000 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.000 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:32.000 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.000 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.000 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:32.000 "name": "raid_bdev1", 00:27:32.000 "uuid": "eebeaf54-1e04-4fda-bcff-1117bca4b3de", 00:27:32.000 "strip_size_kb": 0, 00:27:32.000 "state": "configuring", 00:27:32.000 "raid_level": "raid1", 00:27:32.000 "superblock": true, 00:27:32.000 "num_base_bdevs": 2, 00:27:32.000 "num_base_bdevs_discovered": 1, 00:27:32.000 "num_base_bdevs_operational": 2, 00:27:32.000 "base_bdevs_list": [ 00:27:32.000 { 00:27:32.000 "name": "pt1", 00:27:32.000 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:32.000 "is_configured": true, 00:27:32.000 "data_offset": 256, 00:27:32.000 "data_size": 7936 00:27:32.000 }, 00:27:32.000 { 
00:27:32.000 "name": null, 00:27:32.000 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:32.000 "is_configured": false, 00:27:32.000 "data_offset": 256, 00:27:32.000 "data_size": 7936 00:27:32.000 } 00:27:32.000 ] 00:27:32.000 }' 00:27:32.000 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:32.000 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:32.566 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:27:32.566 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:27:32.566 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:32.566 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:32.566 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.566 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:32.566 [2024-12-05 12:59:14.861770] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:32.566 [2024-12-05 12:59:14.861836] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:32.566 [2024-12-05 12:59:14.861854] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:32.566 [2024-12-05 12:59:14.861863] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:32.566 [2024-12-05 12:59:14.862050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:32.566 [2024-12-05 12:59:14.862064] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:32.566 [2024-12-05 12:59:14.862106] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:32.566 [2024-12-05 12:59:14.862124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:32.566 [2024-12-05 12:59:14.862214] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:32.566 [2024-12-05 12:59:14.862224] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:32.566 [2024-12-05 12:59:14.862283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:32.566 [2024-12-05 12:59:14.862376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:32.567 [2024-12-05 12:59:14.862386] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:27:32.567 [2024-12-05 12:59:14.862465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:32.567 pt2 00:27:32.567 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.567 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:32.567 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:32.567 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:32.567 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:32.567 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:32.567 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:32.567 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:32.567 12:59:14 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:32.567 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:32.567 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:32.567 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:32.567 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:32.567 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:32.567 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.567 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.567 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:32.567 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.567 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:32.567 "name": "raid_bdev1", 00:27:32.567 "uuid": "eebeaf54-1e04-4fda-bcff-1117bca4b3de", 00:27:32.567 "strip_size_kb": 0, 00:27:32.567 "state": "online", 00:27:32.567 "raid_level": "raid1", 00:27:32.567 "superblock": true, 00:27:32.567 "num_base_bdevs": 2, 00:27:32.567 "num_base_bdevs_discovered": 2, 00:27:32.567 "num_base_bdevs_operational": 2, 00:27:32.567 "base_bdevs_list": [ 00:27:32.567 { 00:27:32.567 "name": "pt1", 00:27:32.567 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:32.567 "is_configured": true, 00:27:32.567 "data_offset": 256, 00:27:32.567 "data_size": 7936 00:27:32.567 }, 00:27:32.567 { 00:27:32.567 "name": "pt2", 00:27:32.567 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:27:32.567 "is_configured": true, 00:27:32.567 "data_offset": 256, 00:27:32.567 "data_size": 7936 00:27:32.567 } 00:27:32.567 ] 00:27:32.567 }' 00:27:32.567 12:59:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:32.567 12:59:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:32.824 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:27:32.824 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:32.824 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:32.824 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:32.824 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:27:32.824 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:32.824 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:32.824 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:32.824 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.824 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:32.824 [2024-12-05 12:59:15.186076] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:32.824 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.824 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:32.824 "name": "raid_bdev1", 00:27:32.824 
"aliases": [ 00:27:32.824 "eebeaf54-1e04-4fda-bcff-1117bca4b3de" 00:27:32.824 ], 00:27:32.824 "product_name": "Raid Volume", 00:27:32.824 "block_size": 4096, 00:27:32.824 "num_blocks": 7936, 00:27:32.824 "uuid": "eebeaf54-1e04-4fda-bcff-1117bca4b3de", 00:27:32.824 "md_size": 32, 00:27:32.824 "md_interleave": false, 00:27:32.824 "dif_type": 0, 00:27:32.824 "assigned_rate_limits": { 00:27:32.824 "rw_ios_per_sec": 0, 00:27:32.824 "rw_mbytes_per_sec": 0, 00:27:32.824 "r_mbytes_per_sec": 0, 00:27:32.824 "w_mbytes_per_sec": 0 00:27:32.824 }, 00:27:32.824 "claimed": false, 00:27:32.824 "zoned": false, 00:27:32.824 "supported_io_types": { 00:27:32.824 "read": true, 00:27:32.824 "write": true, 00:27:32.824 "unmap": false, 00:27:32.824 "flush": false, 00:27:32.824 "reset": true, 00:27:32.824 "nvme_admin": false, 00:27:32.824 "nvme_io": false, 00:27:32.824 "nvme_io_md": false, 00:27:32.824 "write_zeroes": true, 00:27:32.824 "zcopy": false, 00:27:32.824 "get_zone_info": false, 00:27:32.824 "zone_management": false, 00:27:32.824 "zone_append": false, 00:27:32.824 "compare": false, 00:27:32.824 "compare_and_write": false, 00:27:32.824 "abort": false, 00:27:32.824 "seek_hole": false, 00:27:32.824 "seek_data": false, 00:27:32.824 "copy": false, 00:27:32.824 "nvme_iov_md": false 00:27:32.824 }, 00:27:32.824 "memory_domains": [ 00:27:32.824 { 00:27:32.824 "dma_device_id": "system", 00:27:32.824 "dma_device_type": 1 00:27:32.824 }, 00:27:32.824 { 00:27:32.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:32.824 "dma_device_type": 2 00:27:32.824 }, 00:27:32.824 { 00:27:32.824 "dma_device_id": "system", 00:27:32.824 "dma_device_type": 1 00:27:32.824 }, 00:27:32.824 { 00:27:32.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:32.824 "dma_device_type": 2 00:27:32.824 } 00:27:32.824 ], 00:27:32.824 "driver_specific": { 00:27:32.824 "raid": { 00:27:32.824 "uuid": "eebeaf54-1e04-4fda-bcff-1117bca4b3de", 00:27:32.824 "strip_size_kb": 0, 00:27:32.824 "state": "online", 00:27:32.824 
"raid_level": "raid1", 00:27:32.824 "superblock": true, 00:27:32.824 "num_base_bdevs": 2, 00:27:32.824 "num_base_bdevs_discovered": 2, 00:27:32.824 "num_base_bdevs_operational": 2, 00:27:32.824 "base_bdevs_list": [ 00:27:32.824 { 00:27:32.824 "name": "pt1", 00:27:32.824 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:32.824 "is_configured": true, 00:27:32.825 "data_offset": 256, 00:27:32.825 "data_size": 7936 00:27:32.825 }, 00:27:32.825 { 00:27:32.825 "name": "pt2", 00:27:32.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:32.825 "is_configured": true, 00:27:32.825 "data_offset": 256, 00:27:32.825 "data_size": 7936 00:27:32.825 } 00:27:32.825 ] 00:27:32.825 } 00:27:32.825 } 00:27:32.825 }' 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:32.825 pt2' 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:32.825 12:59:15 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:27:32.825 [2024-12-05 12:59:15.338116] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' eebeaf54-1e04-4fda-bcff-1117bca4b3de '!=' eebeaf54-1e04-4fda-bcff-1117bca4b3de ']' 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:32.825 [2024-12-05 12:59:15.369916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:32.825 
12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:32.825 "name": "raid_bdev1", 00:27:32.825 "uuid": "eebeaf54-1e04-4fda-bcff-1117bca4b3de", 00:27:32.825 "strip_size_kb": 0, 00:27:32.825 "state": "online", 00:27:32.825 "raid_level": "raid1", 00:27:32.825 "superblock": true, 00:27:32.825 "num_base_bdevs": 2, 00:27:32.825 "num_base_bdevs_discovered": 1, 00:27:32.825 "num_base_bdevs_operational": 1, 00:27:32.825 "base_bdevs_list": [ 00:27:32.825 { 00:27:32.825 "name": null, 00:27:32.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:32.825 "is_configured": false, 00:27:32.825 "data_offset": 0, 00:27:32.825 "data_size": 7936 00:27:32.825 }, 00:27:32.825 { 00:27:32.825 "name": "pt2", 00:27:32.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:32.825 "is_configured": true, 00:27:32.825 "data_offset": 256, 00:27:32.825 "data_size": 7936 00:27:32.825 } 
00:27:32.825 ] 00:27:32.825 }' 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:32.825 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:33.388 [2024-12-05 12:59:15.717953] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:33.388 [2024-12-05 12:59:15.717980] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:33.388 [2024-12-05 12:59:15.718040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:33.388 [2024-12-05 12:59:15.718082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:33.388 [2024-12-05 12:59:15.718092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.388 12:59:15 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:33.388 [2024-12-05 12:59:15.765960] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:33.388 [2024-12-05 
12:59:15.766020] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:33.388 [2024-12-05 12:59:15.766035] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:33.388 [2024-12-05 12:59:15.766044] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:33.388 [2024-12-05 12:59:15.767827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:33.388 [2024-12-05 12:59:15.767961] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:33.388 [2024-12-05 12:59:15.768015] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:33.388 [2024-12-05 12:59:15.768056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:33.388 [2024-12-05 12:59:15.768135] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:33.388 [2024-12-05 12:59:15.768147] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:33.388 [2024-12-05 12:59:15.768213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:33.388 [2024-12-05 12:59:15.768299] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:33.388 [2024-12-05 12:59:15.768306] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:27:33.388 [2024-12-05 12:59:15.768381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:33.388 pt2 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:33.388 "name": "raid_bdev1", 00:27:33.388 "uuid": "eebeaf54-1e04-4fda-bcff-1117bca4b3de", 00:27:33.388 "strip_size_kb": 0, 00:27:33.388 "state": "online", 00:27:33.388 "raid_level": "raid1", 00:27:33.388 "superblock": true, 00:27:33.388 "num_base_bdevs": 2, 00:27:33.388 
"num_base_bdevs_discovered": 1, 00:27:33.388 "num_base_bdevs_operational": 1, 00:27:33.388 "base_bdevs_list": [ 00:27:33.388 { 00:27:33.388 "name": null, 00:27:33.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:33.388 "is_configured": false, 00:27:33.388 "data_offset": 256, 00:27:33.388 "data_size": 7936 00:27:33.388 }, 00:27:33.388 { 00:27:33.388 "name": "pt2", 00:27:33.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:33.388 "is_configured": true, 00:27:33.388 "data_offset": 256, 00:27:33.388 "data_size": 7936 00:27:33.388 } 00:27:33.388 ] 00:27:33.388 }' 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:33.388 12:59:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:33.646 [2024-12-05 12:59:16.074018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:33.646 [2024-12-05 12:59:16.074046] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:33.646 [2024-12-05 12:59:16.074105] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:33.646 [2024-12-05 12:59:16.074152] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:33.646 [2024-12-05 12:59:16.074160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.646 12:59:16 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:33.646 [2024-12-05 12:59:16.114046] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:33.646 [2024-12-05 12:59:16.114187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:33.646 [2024-12-05 12:59:16.114210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:27:33.646 [2024-12-05 12:59:16.114218] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:33.646 [2024-12-05 12:59:16.115969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:33.646 [2024-12-05 12:59:16.116003] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:27:33.646 [2024-12-05 12:59:16.116050] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:33.646 [2024-12-05 12:59:16.116084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:33.646 [2024-12-05 12:59:16.116185] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:33.646 [2024-12-05 12:59:16.116193] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:33.646 [2024-12-05 12:59:16.116207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:27:33.646 [2024-12-05 12:59:16.116254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:33.646 [2024-12-05 12:59:16.116309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:27:33.646 [2024-12-05 12:59:16.116317] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:33.646 [2024-12-05 12:59:16.116372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:27:33.646 [2024-12-05 12:59:16.116452] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:27:33.646 [2024-12-05 12:59:16.116465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:27:33.646 [2024-12-05 12:59:16.116557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:33.646 pt1 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:33.646 "name": "raid_bdev1", 00:27:33.646 "uuid": "eebeaf54-1e04-4fda-bcff-1117bca4b3de", 00:27:33.646 "strip_size_kb": 0, 00:27:33.646 "state": "online", 00:27:33.646 "raid_level": "raid1", 
00:27:33.646 "superblock": true, 00:27:33.646 "num_base_bdevs": 2, 00:27:33.646 "num_base_bdevs_discovered": 1, 00:27:33.646 "num_base_bdevs_operational": 1, 00:27:33.646 "base_bdevs_list": [ 00:27:33.646 { 00:27:33.646 "name": null, 00:27:33.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:33.646 "is_configured": false, 00:27:33.646 "data_offset": 256, 00:27:33.646 "data_size": 7936 00:27:33.646 }, 00:27:33.646 { 00:27:33.646 "name": "pt2", 00:27:33.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:33.646 "is_configured": true, 00:27:33.646 "data_offset": 256, 00:27:33.646 "data_size": 7936 00:27:33.646 } 00:27:33.646 ] 00:27:33.646 }' 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:33.646 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:33.904 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:27:33.904 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:33.904 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.904 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:33.904 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.904 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:27:33.904 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:27:33.904 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:33.904 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.904 
12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:33.905 [2024-12-05 12:59:16.478321] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:34.162 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.162 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' eebeaf54-1e04-4fda-bcff-1117bca4b3de '!=' eebeaf54-1e04-4fda-bcff-1117bca4b3de ']' 00:27:34.162 12:59:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 84678 00:27:34.162 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 84678 ']' 00:27:34.162 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 84678 00:27:34.162 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:27:34.162 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:34.162 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84678 00:27:34.162 killing process with pid 84678 00:27:34.162 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:34.162 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:34.162 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84678' 00:27:34.162 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 84678 00:27:34.162 [2024-12-05 12:59:16.534237] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:34.162 12:59:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # 
wait 84678 00:27:34.162 [2024-12-05 12:59:16.534311] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:34.162 [2024-12-05 12:59:16.534352] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:34.162 [2024-12-05 12:59:16.534368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:27:34.162 [2024-12-05 12:59:16.645243] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:34.726 12:59:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:27:34.726 00:27:34.726 real 0m4.354s 00:27:34.726 user 0m6.668s 00:27:34.726 sys 0m0.748s 00:27:34.726 ************************************ 00:27:34.726 END TEST raid_superblock_test_md_separate 00:27:34.726 ************************************ 00:27:34.726 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:34.726 12:59:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:34.726 12:59:17 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:27:34.726 12:59:17 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:27:34.726 12:59:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:27:34.726 12:59:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:34.726 12:59:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:34.726 ************************************ 00:27:34.726 START TEST raid_rebuild_test_sb_md_separate 00:27:34.726 ************************************ 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local 
raid_level=raid1 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:34.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=84984 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 84984 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 84984 ']' 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:34.726 12:59:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:34.984 [2024-12-05 12:59:17.348824] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:27:34.984 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:34.984 Zero copy mechanism will not be used. 00:27:34.984 [2024-12-05 12:59:17.349080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84984 ] 00:27:34.984 [2024-12-05 12:59:17.507292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.239 [2024-12-05 12:59:17.608538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.239 [2024-12-05 12:59:17.745823] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:35.239 [2024-12-05 12:59:17.745881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd 
bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:35.801 BaseBdev1_malloc 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:35.801 [2024-12-05 12:59:18.221485] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:35.801 [2024-12-05 12:59:18.221553] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:35.801 [2024-12-05 12:59:18.221573] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:35.801 [2024-12-05 12:59:18.221585] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:35.801 [2024-12-05 12:59:18.223462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:35.801 [2024-12-05 12:59:18.223623] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:35.801 BaseBdev1 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:27:35.801 12:59:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:35.801 BaseBdev2_malloc 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:35.801 [2024-12-05 12:59:18.257630] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:35.801 [2024-12-05 12:59:18.257679] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:35.801 [2024-12-05 12:59:18.257696] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:35.801 [2024-12-05 12:59:18.257708] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:35.801 [2024-12-05 12:59:18.259571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:35.801 [2024-12-05 12:59:18.259602] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:35.801 BaseBdev2 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:27:35.801 spare_malloc 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:35.801 spare_delay 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:35.801 [2024-12-05 12:59:18.314765] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:35.801 [2024-12-05 12:59:18.314817] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:35.801 [2024-12-05 12:59:18.314838] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:35.801 [2024-12-05 12:59:18.314848] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:35.801 [2024-12-05 12:59:18.316758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:35.801 [2024-12-05 12:59:18.316792] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:35.801 spare 00:27:35.801 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:35.802 [2024-12-05 12:59:18.322811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:35.802 [2024-12-05 12:59:18.324614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:35.802 [2024-12-05 12:59:18.324785] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:35.802 [2024-12-05 12:59:18.324799] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:35.802 [2024-12-05 12:59:18.324873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:35.802 [2024-12-05 12:59:18.324988] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:35.802 [2024-12-05 12:59:18.324998] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:35.802 [2024-12-05 12:59:18.325090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:35.802 "name": "raid_bdev1", 00:27:35.802 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:35.802 "strip_size_kb": 0, 00:27:35.802 "state": "online", 00:27:35.802 "raid_level": "raid1", 00:27:35.802 "superblock": true, 00:27:35.802 "num_base_bdevs": 2, 00:27:35.802 "num_base_bdevs_discovered": 2, 00:27:35.802 "num_base_bdevs_operational": 2, 00:27:35.802 "base_bdevs_list": [ 00:27:35.802 { 00:27:35.802 "name": "BaseBdev1", 00:27:35.802 "uuid": "d7264447-077e-5123-8958-5d171dd65a35", 00:27:35.802 "is_configured": true, 00:27:35.802 "data_offset": 256, 
00:27:35.802 "data_size": 7936 00:27:35.802 }, 00:27:35.802 { 00:27:35.802 "name": "BaseBdev2", 00:27:35.802 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:35.802 "is_configured": true, 00:27:35.802 "data_offset": 256, 00:27:35.802 "data_size": 7936 00:27:35.802 } 00:27:35.802 ] 00:27:35.802 }' 00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:35.802 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:36.085 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:27:36.085 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:36.085 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.085 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:36.085 [2024-12-05 12:59:18.631158] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:36.085 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.085 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:27:36.085 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:36.085 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.085 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:36.085 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:36.085 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.342 12:59:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:36.342 [2024-12-05 12:59:18.882977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:27:36.342 /dev/nbd0 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:36.342 12:59:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:36.342 1+0 records in 00:27:36.342 1+0 records out 00:27:36.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243873 s, 16.8 MB/s 00:27:36.342 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.599 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:27:36.599 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.599 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:36.599 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@893 -- # return 0 00:27:36.599 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:36.599 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:36.599 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:27:36.599 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:27:36.599 12:59:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:27:37.163 7936+0 records in 00:27:37.163 7936+0 records out 00:27:37.163 32505856 bytes (33 MB, 31 MiB) copied, 0.659045 s, 49.3 MB/s 00:27:37.163 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:27:37.163 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:37.163 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:37.163 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:37.163 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:27:37.163 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:37.163 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:37.421 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:37.421 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:37.421 [2024-12-05 12:59:19.823625] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:37.421 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:37.421 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:37.421 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:37.421 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:37.421 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:27:37.421 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:27:37.421 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:27:37.421 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.421 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:37.421 [2024-12-05 12:59:19.839721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:37.421 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.421 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:37.421 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:37.421 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:37.421 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:37.421 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:37.421 12:59:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:37.421 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:37.421 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:37.421 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:37.421 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:37.422 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:37.422 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:37.422 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.422 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:37.422 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.422 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:37.422 "name": "raid_bdev1", 00:27:37.422 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:37.422 "strip_size_kb": 0, 00:27:37.422 "state": "online", 00:27:37.422 "raid_level": "raid1", 00:27:37.422 "superblock": true, 00:27:37.422 "num_base_bdevs": 2, 00:27:37.422 "num_base_bdevs_discovered": 1, 00:27:37.422 "num_base_bdevs_operational": 1, 00:27:37.422 "base_bdevs_list": [ 00:27:37.422 { 00:27:37.422 "name": null, 00:27:37.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.422 "is_configured": false, 00:27:37.422 "data_offset": 0, 00:27:37.422 "data_size": 7936 00:27:37.422 }, 00:27:37.422 { 00:27:37.422 "name": "BaseBdev2", 00:27:37.422 "uuid": 
"02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:37.422 "is_configured": true, 00:27:37.422 "data_offset": 256, 00:27:37.422 "data_size": 7936 00:27:37.422 } 00:27:37.422 ] 00:27:37.422 }' 00:27:37.422 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:37.422 12:59:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:37.680 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:37.680 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.680 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:37.680 [2024-12-05 12:59:20.147799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:37.680 [2024-12-05 12:59:20.157636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:27:37.680 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.680 12:59:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:27:37.680 [2024-12-05 12:59:20.159467] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:38.615 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:38.615 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:38.615 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:38.615 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:38.615 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:27:38.615 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:38.615 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:38.615 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.615 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:38.615 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.615 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:38.615 "name": "raid_bdev1", 00:27:38.615 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:38.615 "strip_size_kb": 0, 00:27:38.615 "state": "online", 00:27:38.615 "raid_level": "raid1", 00:27:38.615 "superblock": true, 00:27:38.615 "num_base_bdevs": 2, 00:27:38.615 "num_base_bdevs_discovered": 2, 00:27:38.615 "num_base_bdevs_operational": 2, 00:27:38.615 "process": { 00:27:38.615 "type": "rebuild", 00:27:38.615 "target": "spare", 00:27:38.615 "progress": { 00:27:38.615 "blocks": 2560, 00:27:38.615 "percent": 32 00:27:38.615 } 00:27:38.615 }, 00:27:38.615 "base_bdevs_list": [ 00:27:38.615 { 00:27:38.615 "name": "spare", 00:27:38.615 "uuid": "74aa611e-b732-5b56-a4f8-8daa3bf44a66", 00:27:38.615 "is_configured": true, 00:27:38.615 "data_offset": 256, 00:27:38.615 "data_size": 7936 00:27:38.615 }, 00:27:38.615 { 00:27:38.615 "name": "BaseBdev2", 00:27:38.615 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:38.615 "is_configured": true, 00:27:38.615 "data_offset": 256, 00:27:38.615 "data_size": 7936 00:27:38.615 } 00:27:38.615 ] 00:27:38.615 }' 00:27:38.615 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:38.873 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:38.873 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:38.873 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:38.873 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:38.873 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.873 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:38.874 [2024-12-05 12:59:21.257526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:38.874 [2024-12-05 12:59:21.264873] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:38.874 [2024-12-05 12:59:21.264935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:38.874 [2024-12-05 12:59:21.264950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:38.874 [2024-12-05 12:59:21.264962] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:38.874 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.874 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:38.874 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:38.874 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:38.874 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:38.874 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:38.874 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:38.874 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:38.874 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:38.874 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:38.874 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:38.874 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:38.874 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.874 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:38.874 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:38.874 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.874 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:38.874 "name": "raid_bdev1", 00:27:38.874 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:38.874 "strip_size_kb": 0, 00:27:38.874 "state": "online", 00:27:38.874 "raid_level": "raid1", 00:27:38.874 "superblock": true, 00:27:38.874 "num_base_bdevs": 2, 00:27:38.874 "num_base_bdevs_discovered": 1, 00:27:38.874 "num_base_bdevs_operational": 1, 00:27:38.874 "base_bdevs_list": [ 00:27:38.874 { 00:27:38.874 "name": null, 00:27:38.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.874 "is_configured": false, 00:27:38.874 "data_offset": 0, 00:27:38.874 "data_size": 7936 00:27:38.874 }, 00:27:38.874 { 00:27:38.874 
"name": "BaseBdev2", 00:27:38.874 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:38.874 "is_configured": true, 00:27:38.874 "data_offset": 256, 00:27:38.874 "data_size": 7936 00:27:38.874 } 00:27:38.874 ] 00:27:38.874 }' 00:27:38.874 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:38.874 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:39.132 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:39.132 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:39.132 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:39.132 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:39.132 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:39.132 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:39.132 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:39.132 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.132 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:39.132 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.132 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:39.132 "name": "raid_bdev1", 00:27:39.132 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:39.132 "strip_size_kb": 0, 00:27:39.132 "state": "online", 00:27:39.132 "raid_level": "raid1", 00:27:39.132 
"superblock": true, 00:27:39.132 "num_base_bdevs": 2, 00:27:39.132 "num_base_bdevs_discovered": 1, 00:27:39.132 "num_base_bdevs_operational": 1, 00:27:39.132 "base_bdevs_list": [ 00:27:39.132 { 00:27:39.132 "name": null, 00:27:39.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:39.132 "is_configured": false, 00:27:39.132 "data_offset": 0, 00:27:39.132 "data_size": 7936 00:27:39.132 }, 00:27:39.132 { 00:27:39.132 "name": "BaseBdev2", 00:27:39.132 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:39.132 "is_configured": true, 00:27:39.132 "data_offset": 256, 00:27:39.132 "data_size": 7936 00:27:39.132 } 00:27:39.132 ] 00:27:39.132 }' 00:27:39.132 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:39.132 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:39.132 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:39.132 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:39.132 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:39.132 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.132 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:39.132 [2024-12-05 12:59:21.695097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:39.132 [2024-12-05 12:59:21.704406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:27:39.132 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.132 12:59:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:27:39.132 
[2024-12-05 12:59:21.706328] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:40.505 "name": "raid_bdev1", 00:27:40.505 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:40.505 "strip_size_kb": 0, 00:27:40.505 "state": "online", 00:27:40.505 "raid_level": "raid1", 00:27:40.505 "superblock": true, 00:27:40.505 "num_base_bdevs": 2, 00:27:40.505 "num_base_bdevs_discovered": 2, 00:27:40.505 "num_base_bdevs_operational": 2, 00:27:40.505 "process": { 00:27:40.505 "type": "rebuild", 00:27:40.505 "target": "spare", 00:27:40.505 "progress": { 00:27:40.505 "blocks": 2560, 00:27:40.505 "percent": 
32 00:27:40.505 } 00:27:40.505 }, 00:27:40.505 "base_bdevs_list": [ 00:27:40.505 { 00:27:40.505 "name": "spare", 00:27:40.505 "uuid": "74aa611e-b732-5b56-a4f8-8daa3bf44a66", 00:27:40.505 "is_configured": true, 00:27:40.505 "data_offset": 256, 00:27:40.505 "data_size": 7936 00:27:40.505 }, 00:27:40.505 { 00:27:40.505 "name": "BaseBdev2", 00:27:40.505 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:40.505 "is_configured": true, 00:27:40.505 "data_offset": 256, 00:27:40.505 "data_size": 7936 00:27:40.505 } 00:27:40.505 ] 00:27:40.505 }' 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:27:40.505 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=555 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:40.505 12:59:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:40.505 "name": "raid_bdev1", 00:27:40.505 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:40.505 "strip_size_kb": 0, 00:27:40.505 "state": "online", 00:27:40.505 "raid_level": "raid1", 00:27:40.505 "superblock": true, 00:27:40.505 "num_base_bdevs": 2, 00:27:40.505 "num_base_bdevs_discovered": 2, 00:27:40.505 "num_base_bdevs_operational": 2, 00:27:40.505 "process": { 00:27:40.505 "type": "rebuild", 00:27:40.505 "target": "spare", 00:27:40.505 "progress": { 00:27:40.505 "blocks": 2560, 00:27:40.505 "percent": 32 00:27:40.505 } 00:27:40.505 }, 00:27:40.505 "base_bdevs_list": [ 00:27:40.505 { 00:27:40.505 "name": "spare", 00:27:40.505 "uuid": 
"74aa611e-b732-5b56-a4f8-8daa3bf44a66", 00:27:40.505 "is_configured": true, 00:27:40.505 "data_offset": 256, 00:27:40.505 "data_size": 7936 00:27:40.505 }, 00:27:40.505 { 00:27:40.505 "name": "BaseBdev2", 00:27:40.505 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:40.505 "is_configured": true, 00:27:40.505 "data_offset": 256, 00:27:40.505 "data_size": 7936 00:27:40.505 } 00:27:40.505 ] 00:27:40.505 }' 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:40.505 12:59:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:41.497 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:41.497 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:41.497 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:41.497 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:41.497 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:41.497 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:41.497 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:41.497 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:41.497 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:41.497 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:41.497 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.497 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:41.497 "name": "raid_bdev1", 00:27:41.497 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:41.497 "strip_size_kb": 0, 00:27:41.497 "state": "online", 00:27:41.497 "raid_level": "raid1", 00:27:41.497 "superblock": true, 00:27:41.497 "num_base_bdevs": 2, 00:27:41.497 "num_base_bdevs_discovered": 2, 00:27:41.497 "num_base_bdevs_operational": 2, 00:27:41.497 "process": { 00:27:41.497 "type": "rebuild", 00:27:41.497 "target": "spare", 00:27:41.497 "progress": { 00:27:41.497 "blocks": 5376, 00:27:41.497 "percent": 67 00:27:41.497 } 00:27:41.497 }, 00:27:41.497 "base_bdevs_list": [ 00:27:41.497 { 00:27:41.497 "name": "spare", 00:27:41.497 "uuid": "74aa611e-b732-5b56-a4f8-8daa3bf44a66", 00:27:41.497 "is_configured": true, 00:27:41.497 "data_offset": 256, 00:27:41.497 "data_size": 7936 00:27:41.497 }, 00:27:41.497 { 00:27:41.497 "name": "BaseBdev2", 00:27:41.497 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:41.497 "is_configured": true, 00:27:41.497 "data_offset": 256, 00:27:41.497 "data_size": 7936 00:27:41.497 } 00:27:41.497 ] 00:27:41.497 }' 00:27:41.497 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:41.497 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:41.497 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:41.497 12:59:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:41.497 12:59:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:42.432 [2024-12-05 12:59:24.820554] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:42.432 [2024-12-05 12:59:24.820625] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:42.432 [2024-12-05 12:59:24.820737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:42.432 12:59:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:42.432 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:42.432 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:42.432 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:42.432 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:42.432 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:42.432 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:42.432 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.432 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:42.432 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.432 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.689 12:59:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:42.689 "name": "raid_bdev1", 00:27:42.690 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:42.690 "strip_size_kb": 0, 00:27:42.690 "state": "online", 00:27:42.690 "raid_level": "raid1", 00:27:42.690 "superblock": true, 00:27:42.690 "num_base_bdevs": 2, 00:27:42.690 "num_base_bdevs_discovered": 2, 00:27:42.690 "num_base_bdevs_operational": 2, 00:27:42.690 "base_bdevs_list": [ 00:27:42.690 { 00:27:42.690 "name": "spare", 00:27:42.690 "uuid": "74aa611e-b732-5b56-a4f8-8daa3bf44a66", 00:27:42.690 "is_configured": true, 00:27:42.690 "data_offset": 256, 00:27:42.690 "data_size": 7936 00:27:42.690 }, 00:27:42.690 { 00:27:42.690 "name": "BaseBdev2", 00:27:42.690 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:42.690 "is_configured": true, 00:27:42.690 "data_offset": 256, 00:27:42.690 "data_size": 7936 00:27:42.690 } 00:27:42.690 ] 00:27:42.690 }' 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:42.690 12:59:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:42.690 "name": "raid_bdev1", 00:27:42.690 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:42.690 "strip_size_kb": 0, 00:27:42.690 "state": "online", 00:27:42.690 "raid_level": "raid1", 00:27:42.690 "superblock": true, 00:27:42.690 "num_base_bdevs": 2, 00:27:42.690 "num_base_bdevs_discovered": 2, 00:27:42.690 "num_base_bdevs_operational": 2, 00:27:42.690 "base_bdevs_list": [ 00:27:42.690 { 00:27:42.690 "name": "spare", 00:27:42.690 "uuid": "74aa611e-b732-5b56-a4f8-8daa3bf44a66", 00:27:42.690 "is_configured": true, 00:27:42.690 "data_offset": 256, 00:27:42.690 "data_size": 7936 00:27:42.690 }, 00:27:42.690 { 00:27:42.690 "name": "BaseBdev2", 00:27:42.690 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:42.690 "is_configured": true, 00:27:42.690 "data_offset": 256, 00:27:42.690 "data_size": 7936 00:27:42.690 } 00:27:42.690 ] 00:27:42.690 }' 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:42.690 "name": "raid_bdev1", 00:27:42.690 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:42.690 "strip_size_kb": 0, 00:27:42.690 "state": "online", 00:27:42.690 "raid_level": "raid1", 00:27:42.690 "superblock": true, 00:27:42.690 "num_base_bdevs": 2, 00:27:42.690 "num_base_bdevs_discovered": 2, 00:27:42.690 "num_base_bdevs_operational": 2, 00:27:42.690 "base_bdevs_list": [ 00:27:42.690 { 00:27:42.690 "name": "spare", 00:27:42.690 "uuid": "74aa611e-b732-5b56-a4f8-8daa3bf44a66", 00:27:42.690 "is_configured": true, 00:27:42.690 "data_offset": 256, 00:27:42.690 "data_size": 7936 00:27:42.690 }, 00:27:42.690 { 00:27:42.690 "name": "BaseBdev2", 00:27:42.690 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:42.690 "is_configured": true, 00:27:42.690 "data_offset": 256, 00:27:42.690 "data_size": 7936 00:27:42.690 } 00:27:42.690 ] 00:27:42.690 }' 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:42.690 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:43.256 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:43.256 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.256 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:43.256 [2024-12-05 12:59:25.553078] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:43.256 [2024-12-05 12:59:25.553105] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:43.256 [2024-12-05 12:59:25.553169] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:43.256 [2024-12-05 12:59:25.553230] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:43.256 [2024-12-05 12:59:25.553239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:43.256 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.256 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:43.256 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.256 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:43.256 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:27:43.256 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.256 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:27:43.256 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:27:43.256 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:27:43.256 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:43.256 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:43.256 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:43.256 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:43.256 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:43.256 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:43.256 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:27:43.257 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:43.257 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:43.257 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:43.257 /dev/nbd0 00:27:43.257 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:43.514 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:43.514 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:43.515 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:27:43.515 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:43.515 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:43.515 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:43.515 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:27:43.515 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:43.515 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:43.515 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:43.515 1+0 records in 00:27:43.515 1+0 records out 00:27:43.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598237 s, 6.8 MB/s 00:27:43.515 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:43.515 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:27:43.515 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:43.515 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:43.515 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:27:43.515 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:43.515 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:43.515 12:59:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:27:43.515 /dev/nbd1 00:27:43.515 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:43.515 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:43.515 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:27:43.515 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:27:43.515 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:43.515 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:27:43.515 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:27:43.515 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:27:43.515 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:43.515 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:43.515 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:43.515 1+0 records in 00:27:43.515 1+0 records out 00:27:43.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022647 s, 18.1 MB/s 00:27:43.515 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:43.773 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:27:43.773 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:43.773 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:43.773 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:27:43.773 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:43.773 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:43.773 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:43.773 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:27:43.773 12:59:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:43.773 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:43.773 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:43.773 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:27:43.773 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:43.773 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:44.032 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:44.032 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:44.032 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:44.032 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:44.032 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:44.032 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:44.032 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:27:44.032 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:27:44.032 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:44.032 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:27:44.291 12:59:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.291 [2024-12-05 12:59:26.653023] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:44.291 [2024-12-05 12:59:26.653077] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:44.291 [2024-12-05 12:59:26.653096] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:44.291 [2024-12-05 12:59:26.653104] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:44.291 [2024-12-05 12:59:26.654850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:44.291 [2024-12-05 12:59:26.654882] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:44.291 [2024-12-05 12:59:26.654942] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:44.291 [2024-12-05 12:59:26.654983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:44.291 [2024-12-05 12:59:26.655092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:44.291 spare 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.291 [2024-12-05 12:59:26.755166] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:27:44.291 [2024-12-05 12:59:26.755216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:44.291 [2024-12-05 12:59:26.755321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:27:44.291 [2024-12-05 12:59:26.755451] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:27:44.291 [2024-12-05 12:59:26.755459] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:27:44.291 [2024-12-05 12:59:26.755589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:44.291 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:44.292 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:44.292 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.292 12:59:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.292 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.292 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:44.292 "name": "raid_bdev1", 00:27:44.292 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:44.292 "strip_size_kb": 0, 00:27:44.292 "state": "online", 00:27:44.292 "raid_level": "raid1", 00:27:44.292 "superblock": true, 00:27:44.292 "num_base_bdevs": 2, 00:27:44.292 "num_base_bdevs_discovered": 2, 00:27:44.292 "num_base_bdevs_operational": 2, 00:27:44.292 "base_bdevs_list": [ 00:27:44.292 { 00:27:44.292 "name": "spare", 00:27:44.292 "uuid": "74aa611e-b732-5b56-a4f8-8daa3bf44a66", 00:27:44.292 "is_configured": true, 00:27:44.292 "data_offset": 256, 00:27:44.292 "data_size": 7936 00:27:44.292 }, 00:27:44.292 { 00:27:44.292 "name": "BaseBdev2", 00:27:44.292 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:44.292 "is_configured": true, 00:27:44.292 "data_offset": 256, 00:27:44.292 "data_size": 7936 00:27:44.292 } 00:27:44.292 ] 00:27:44.292 }' 00:27:44.292 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:44.292 12:59:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.550 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:44.550 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:44.550 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:44.550 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:44.550 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:27:44.550 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:44.550 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.550 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.550 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:44.550 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:44.808 "name": "raid_bdev1", 00:27:44.808 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:44.808 "strip_size_kb": 0, 00:27:44.808 "state": "online", 00:27:44.808 "raid_level": "raid1", 00:27:44.808 "superblock": true, 00:27:44.808 "num_base_bdevs": 2, 00:27:44.808 "num_base_bdevs_discovered": 2, 00:27:44.808 "num_base_bdevs_operational": 2, 00:27:44.808 "base_bdevs_list": [ 00:27:44.808 { 00:27:44.808 "name": "spare", 00:27:44.808 "uuid": "74aa611e-b732-5b56-a4f8-8daa3bf44a66", 00:27:44.808 "is_configured": true, 00:27:44.808 "data_offset": 256, 00:27:44.808 "data_size": 7936 00:27:44.808 }, 00:27:44.808 { 00:27:44.808 "name": "BaseBdev2", 00:27:44.808 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:44.808 "is_configured": true, 00:27:44.808 "data_offset": 256, 00:27:44.808 "data_size": 7936 00:27:44.808 } 00:27:44.808 ] 00:27:44.808 }' 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:44.808 
12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.808 [2024-12-05 12:59:27.245166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:44.808 "name": "raid_bdev1", 00:27:44.808 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:44.808 "strip_size_kb": 0, 00:27:44.808 "state": "online", 00:27:44.808 "raid_level": "raid1", 00:27:44.808 "superblock": true, 00:27:44.808 "num_base_bdevs": 2, 00:27:44.808 "num_base_bdevs_discovered": 1, 00:27:44.808 "num_base_bdevs_operational": 1, 00:27:44.808 "base_bdevs_list": [ 00:27:44.808 { 00:27:44.808 "name": null, 00:27:44.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.808 "is_configured": false, 00:27:44.808 "data_offset": 0, 00:27:44.808 "data_size": 7936 00:27:44.808 }, 00:27:44.808 { 00:27:44.808 
"name": "BaseBdev2", 00:27:44.808 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:44.808 "is_configured": true, 00:27:44.808 "data_offset": 256, 00:27:44.808 "data_size": 7936 00:27:44.808 } 00:27:44.808 ] 00:27:44.808 }' 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:44.808 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.066 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:45.066 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.066 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.066 [2024-12-05 12:59:27.609254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:45.066 [2024-12-05 12:59:27.609404] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:45.066 [2024-12-05 12:59:27.609419] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:27:45.066 [2024-12-05 12:59:27.609455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:45.066 [2024-12-05 12:59:27.616744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:27:45.066 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.066 12:59:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:27:45.066 [2024-12-05 12:59:27.618291] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:46.438 "name": "raid_bdev1", 00:27:46.438 
"uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:46.438 "strip_size_kb": 0, 00:27:46.438 "state": "online", 00:27:46.438 "raid_level": "raid1", 00:27:46.438 "superblock": true, 00:27:46.438 "num_base_bdevs": 2, 00:27:46.438 "num_base_bdevs_discovered": 2, 00:27:46.438 "num_base_bdevs_operational": 2, 00:27:46.438 "process": { 00:27:46.438 "type": "rebuild", 00:27:46.438 "target": "spare", 00:27:46.438 "progress": { 00:27:46.438 "blocks": 2560, 00:27:46.438 "percent": 32 00:27:46.438 } 00:27:46.438 }, 00:27:46.438 "base_bdevs_list": [ 00:27:46.438 { 00:27:46.438 "name": "spare", 00:27:46.438 "uuid": "74aa611e-b732-5b56-a4f8-8daa3bf44a66", 00:27:46.438 "is_configured": true, 00:27:46.438 "data_offset": 256, 00:27:46.438 "data_size": 7936 00:27:46.438 }, 00:27:46.438 { 00:27:46.438 "name": "BaseBdev2", 00:27:46.438 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:46.438 "is_configured": true, 00:27:46.438 "data_offset": 256, 00:27:46.438 "data_size": 7936 00:27:46.438 } 00:27:46.438 ] 00:27:46.438 }' 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:46.438 [2024-12-05 12:59:28.725051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:46.438 
[2024-12-05 12:59:28.823906] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:46.438 [2024-12-05 12:59:28.823972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:46.438 [2024-12-05 12:59:28.823984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:46.438 [2024-12-05 12:59:28.823991] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:46.438 12:59:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.438 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:46.438 "name": "raid_bdev1", 00:27:46.438 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:46.438 "strip_size_kb": 0, 00:27:46.438 "state": "online", 00:27:46.438 "raid_level": "raid1", 00:27:46.438 "superblock": true, 00:27:46.438 "num_base_bdevs": 2, 00:27:46.438 "num_base_bdevs_discovered": 1, 00:27:46.438 "num_base_bdevs_operational": 1, 00:27:46.438 "base_bdevs_list": [ 00:27:46.438 { 00:27:46.438 "name": null, 00:27:46.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:46.439 "is_configured": false, 00:27:46.439 "data_offset": 0, 00:27:46.439 "data_size": 7936 00:27:46.439 }, 00:27:46.439 { 00:27:46.439 "name": "BaseBdev2", 00:27:46.439 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:46.439 "is_configured": true, 00:27:46.439 "data_offset": 256, 00:27:46.439 "data_size": 7936 00:27:46.439 } 00:27:46.439 ] 00:27:46.439 }' 00:27:46.439 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:46.439 12:59:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:46.697 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:46.697 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.697 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:27:46.697 [2024-12-05 12:59:29.160330] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:46.697 [2024-12-05 12:59:29.160383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:46.697 [2024-12-05 12:59:29.160403] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:27:46.697 [2024-12-05 12:59:29.160414] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:46.697 [2024-12-05 12:59:29.160617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:46.697 [2024-12-05 12:59:29.160629] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:46.698 [2024-12-05 12:59:29.160676] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:46.698 [2024-12-05 12:59:29.160687] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:46.698 [2024-12-05 12:59:29.160694] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:27:46.698 [2024-12-05 12:59:29.160724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:46.698 [2024-12-05 12:59:29.167896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:27:46.698 spare 00:27:46.698 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.698 12:59:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:27:46.698 [2024-12-05 12:59:29.169439] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:47.677 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:47.677 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:47.677 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:47.677 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:47.677 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:47.677 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:47.677 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:47.677 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.677 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:47.677 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.677 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:47.677 "name": 
"raid_bdev1", 00:27:47.677 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:47.677 "strip_size_kb": 0, 00:27:47.677 "state": "online", 00:27:47.677 "raid_level": "raid1", 00:27:47.677 "superblock": true, 00:27:47.677 "num_base_bdevs": 2, 00:27:47.677 "num_base_bdevs_discovered": 2, 00:27:47.677 "num_base_bdevs_operational": 2, 00:27:47.677 "process": { 00:27:47.677 "type": "rebuild", 00:27:47.677 "target": "spare", 00:27:47.677 "progress": { 00:27:47.677 "blocks": 2560, 00:27:47.677 "percent": 32 00:27:47.677 } 00:27:47.677 }, 00:27:47.677 "base_bdevs_list": [ 00:27:47.677 { 00:27:47.677 "name": "spare", 00:27:47.677 "uuid": "74aa611e-b732-5b56-a4f8-8daa3bf44a66", 00:27:47.677 "is_configured": true, 00:27:47.677 "data_offset": 256, 00:27:47.677 "data_size": 7936 00:27:47.677 }, 00:27:47.677 { 00:27:47.677 "name": "BaseBdev2", 00:27:47.677 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:47.677 "is_configured": true, 00:27:47.677 "data_offset": 256, 00:27:47.677 "data_size": 7936 00:27:47.677 } 00:27:47.677 ] 00:27:47.677 }' 00:27:47.677 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:47.677 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:47.677 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:47.933 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:47.933 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:27:47.933 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.934 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:47.934 [2024-12-05 12:59:30.268071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:27:47.934 [2024-12-05 12:59:30.274457] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:47.934 [2024-12-05 12:59:30.274516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:47.934 [2024-12-05 12:59:30.274531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:47.934 [2024-12-05 12:59:30.274537] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:47.934 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.934 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:47.934 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:47.934 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:47.934 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:47.934 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:47.934 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:47.934 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:47.934 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:47.934 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:47.934 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:47.934 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:27:47.934 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.934 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:47.934 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:47.934 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.934 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:47.934 "name": "raid_bdev1", 00:27:47.934 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:47.934 "strip_size_kb": 0, 00:27:47.934 "state": "online", 00:27:47.934 "raid_level": "raid1", 00:27:47.934 "superblock": true, 00:27:47.934 "num_base_bdevs": 2, 00:27:47.934 "num_base_bdevs_discovered": 1, 00:27:47.934 "num_base_bdevs_operational": 1, 00:27:47.934 "base_bdevs_list": [ 00:27:47.934 { 00:27:47.934 "name": null, 00:27:47.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:47.934 "is_configured": false, 00:27:47.934 "data_offset": 0, 00:27:47.934 "data_size": 7936 00:27:47.934 }, 00:27:47.934 { 00:27:47.934 "name": "BaseBdev2", 00:27:47.934 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:47.934 "is_configured": true, 00:27:47.934 "data_offset": 256, 00:27:47.934 "data_size": 7936 00:27:47.934 } 00:27:47.934 ] 00:27:47.934 }' 00:27:47.934 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:47.934 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.192 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:48.193 12:59:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:48.193 "name": "raid_bdev1", 00:27:48.193 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:48.193 "strip_size_kb": 0, 00:27:48.193 "state": "online", 00:27:48.193 "raid_level": "raid1", 00:27:48.193 "superblock": true, 00:27:48.193 "num_base_bdevs": 2, 00:27:48.193 "num_base_bdevs_discovered": 1, 00:27:48.193 "num_base_bdevs_operational": 1, 00:27:48.193 "base_bdevs_list": [ 00:27:48.193 { 00:27:48.193 "name": null, 00:27:48.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:48.193 "is_configured": false, 00:27:48.193 "data_offset": 0, 00:27:48.193 "data_size": 7936 00:27:48.193 }, 00:27:48.193 { 00:27:48.193 "name": "BaseBdev2", 00:27:48.193 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:48.193 "is_configured": true, 00:27:48.193 "data_offset": 256, 00:27:48.193 "data_size": 7936 00:27:48.193 } 00:27:48.193 ] 00:27:48.193 }' 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.193 [2024-12-05 12:59:30.694769] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:48.193 [2024-12-05 12:59:30.694812] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:48.193 [2024-12-05 12:59:30.694829] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:27:48.193 [2024-12-05 12:59:30.694836] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:48.193 [2024-12-05 12:59:30.695006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:48.193 [2024-12-05 12:59:30.695015] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:27:48.193 [2024-12-05 12:59:30.695056] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:48.193 [2024-12-05 12:59:30.695066] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:48.193 [2024-12-05 12:59:30.695073] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:48.193 [2024-12-05 12:59:30.695081] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:27:48.193 BaseBdev1 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.193 12:59:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:27:49.125 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:49.125 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:49.125 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:49.125 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:49.125 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:49.125 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:49.125 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:49.125 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:49.125 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:27:49.125 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:49.125 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:49.125 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.125 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.125 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:49.447 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.447 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:49.447 "name": "raid_bdev1", 00:27:49.447 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:49.447 "strip_size_kb": 0, 00:27:49.447 "state": "online", 00:27:49.447 "raid_level": "raid1", 00:27:49.447 "superblock": true, 00:27:49.447 "num_base_bdevs": 2, 00:27:49.447 "num_base_bdevs_discovered": 1, 00:27:49.447 "num_base_bdevs_operational": 1, 00:27:49.447 "base_bdevs_list": [ 00:27:49.447 { 00:27:49.447 "name": null, 00:27:49.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.447 "is_configured": false, 00:27:49.447 "data_offset": 0, 00:27:49.447 "data_size": 7936 00:27:49.447 }, 00:27:49.447 { 00:27:49.447 "name": "BaseBdev2", 00:27:49.447 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:49.447 "is_configured": true, 00:27:49.447 "data_offset": 256, 00:27:49.447 "data_size": 7936 00:27:49.447 } 00:27:49.447 ] 00:27:49.447 }' 00:27:49.447 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:49.447 12:59:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:49.706 "name": "raid_bdev1", 00:27:49.706 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:49.706 "strip_size_kb": 0, 00:27:49.706 "state": "online", 00:27:49.706 "raid_level": "raid1", 00:27:49.706 "superblock": true, 00:27:49.706 "num_base_bdevs": 2, 00:27:49.706 "num_base_bdevs_discovered": 1, 00:27:49.706 "num_base_bdevs_operational": 1, 00:27:49.706 "base_bdevs_list": [ 00:27:49.706 { 00:27:49.706 "name": null, 00:27:49.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.706 "is_configured": false, 00:27:49.706 "data_offset": 0, 00:27:49.706 "data_size": 7936 00:27:49.706 }, 00:27:49.706 { 00:27:49.706 "name": "BaseBdev2", 00:27:49.706 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:49.706 "is_configured": 
true, 00:27:49.706 "data_offset": 256, 00:27:49.706 "data_size": 7936 00:27:49.706 } 00:27:49.706 ] 00:27:49.706 }' 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:49.706 [2024-12-05 12:59:32.127079] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:49.706 [2024-12-05 12:59:32.127200] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:49.706 [2024-12-05 12:59:32.127211] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:49.706 request: 00:27:49.706 { 00:27:49.706 "base_bdev": "BaseBdev1", 00:27:49.706 "raid_bdev": "raid_bdev1", 00:27:49.706 "method": "bdev_raid_add_base_bdev", 00:27:49.706 "req_id": 1 00:27:49.706 } 00:27:49.706 Got JSON-RPC error response 00:27:49.706 response: 00:27:49.706 { 00:27:49.706 "code": -22, 00:27:49.706 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:49.706 } 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:49.706 12:59:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:27:50.640 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:50.640 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:50.640 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:50.640 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:27:50.640 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:50.640 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:50.640 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:50.640 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:50.640 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:50.640 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:50.640 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:50.640 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.640 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:50.640 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.640 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.640 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:50.640 "name": "raid_bdev1", 00:27:50.640 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:50.640 "strip_size_kb": 0, 00:27:50.640 "state": "online", 00:27:50.640 "raid_level": "raid1", 00:27:50.640 "superblock": true, 00:27:50.640 "num_base_bdevs": 2, 00:27:50.640 "num_base_bdevs_discovered": 1, 00:27:50.640 "num_base_bdevs_operational": 1, 00:27:50.640 "base_bdevs_list": [ 00:27:50.640 { 00:27:50.640 "name": null, 00:27:50.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.640 "is_configured": false, 00:27:50.640 
"data_offset": 0, 00:27:50.640 "data_size": 7936 00:27:50.640 }, 00:27:50.640 { 00:27:50.640 "name": "BaseBdev2", 00:27:50.640 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:50.640 "is_configured": true, 00:27:50.640 "data_offset": 256, 00:27:50.640 "data_size": 7936 00:27:50.640 } 00:27:50.640 ] 00:27:50.640 }' 00:27:50.640 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:50.640 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:50.898 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:50.898 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:50.898 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:50.898 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:50.898 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:50.898 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:50.898 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.898 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:50.898 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.898 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.898 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:50.898 "name": "raid_bdev1", 00:27:50.898 "uuid": "79af873b-19c4-4266-a4f5-b825592df6a6", 00:27:50.898 
"strip_size_kb": 0, 00:27:50.898 "state": "online", 00:27:50.898 "raid_level": "raid1", 00:27:50.898 "superblock": true, 00:27:50.898 "num_base_bdevs": 2, 00:27:50.898 "num_base_bdevs_discovered": 1, 00:27:50.898 "num_base_bdevs_operational": 1, 00:27:50.898 "base_bdevs_list": [ 00:27:50.898 { 00:27:50.898 "name": null, 00:27:50.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.898 "is_configured": false, 00:27:50.898 "data_offset": 0, 00:27:50.898 "data_size": 7936 00:27:50.898 }, 00:27:50.898 { 00:27:50.898 "name": "BaseBdev2", 00:27:50.898 "uuid": "02d9911f-11eb-5fff-a8aa-e530cfaa4013", 00:27:50.898 "is_configured": true, 00:27:50.898 "data_offset": 256, 00:27:50.898 "data_size": 7936 00:27:50.898 } 00:27:50.898 ] 00:27:50.898 }' 00:27:50.898 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:51.156 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:51.156 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:51.156 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:51.156 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 84984 00:27:51.156 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 84984 ']' 00:27:51.156 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 84984 00:27:51.156 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:27:51.156 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:51.156 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84984 00:27:51.156 12:59:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:51.156 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:51.156 killing process with pid 84984 00:27:51.156 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84984' 00:27:51.156 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 84984 00:27:51.156 Received shutdown signal, test time was about 60.000000 seconds 00:27:51.156 00:27:51.156 Latency(us) 00:27:51.156 [2024-12-05T12:59:33.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.156 [2024-12-05T12:59:33.743Z] =================================================================================================================== 00:27:51.156 [2024-12-05T12:59:33.743Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:51.156 [2024-12-05 12:59:33.547480] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:51.156 12:59:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 84984 00:27:51.156 [2024-12-05 12:59:33.547587] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:51.156 [2024-12-05 12:59:33.547625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:51.156 [2024-12-05 12:59:33.547636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:27:51.156 [2024-12-05 12:59:33.707433] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:51.722 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:27:51.722 00:27:51.722 real 0m16.999s 00:27:51.722 user 0m21.665s 00:27:51.722 sys 0m1.828s 00:27:51.722 12:59:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:51.722 ************************************ 00:27:51.722 END TEST raid_rebuild_test_sb_md_separate 00:27:51.722 ************************************ 00:27:51.723 12:59:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:51.981 12:59:34 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:27:51.981 12:59:34 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:27:51.981 12:59:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:51.981 12:59:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:51.981 12:59:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:51.981 ************************************ 00:27:51.981 START TEST raid_state_function_test_sb_md_interleaved 00:27:51.981 ************************************ 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:51.981 12:59:34 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:27:51.981 Process raid pid: 85647 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=85647 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85647' 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 85647 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 85647 ']' 00:27:51.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:51.981 12:59:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:51.981 [2024-12-05 12:59:34.412427] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:27:51.981 [2024-12-05 12:59:34.412569] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.239 [2024-12-05 12:59:34.567706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.239 [2024-12-05 12:59:34.653818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.239 [2024-12-05 12:59:34.765250] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:52.239 [2024-12-05 12:59:34.765290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:52.804 [2024-12-05 12:59:35.222113] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:52.804 [2024-12-05 12:59:35.222168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:52.804 [2024-12-05 12:59:35.222176] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:52.804 [2024-12-05 12:59:35.222184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:52.804 12:59:35 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:52.804 12:59:35 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:52.804 "name": "Existed_Raid", 00:27:52.804 "uuid": "995287c0-8adc-4d0d-ac25-12063e1f5ffd", 00:27:52.804 "strip_size_kb": 0, 00:27:52.804 "state": "configuring", 00:27:52.804 "raid_level": "raid1", 00:27:52.804 "superblock": true, 00:27:52.804 "num_base_bdevs": 2, 00:27:52.804 "num_base_bdevs_discovered": 0, 00:27:52.804 "num_base_bdevs_operational": 2, 00:27:52.804 "base_bdevs_list": [ 00:27:52.804 { 00:27:52.804 "name": "BaseBdev1", 00:27:52.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.804 "is_configured": false, 00:27:52.804 "data_offset": 0, 00:27:52.804 "data_size": 0 00:27:52.804 }, 00:27:52.804 { 00:27:52.804 "name": "BaseBdev2", 00:27:52.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.804 "is_configured": false, 00:27:52.804 "data_offset": 0, 00:27:52.804 "data_size": 0 00:27:52.804 } 00:27:52.804 ] 00:27:52.804 }' 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:52.804 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:53.062 [2024-12-05 12:59:35.534160] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:53.062 [2024-12-05 12:59:35.534200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:53.062 [2024-12-05 12:59:35.542170] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:53.062 [2024-12-05 12:59:35.542217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:53.062 [2024-12-05 12:59:35.542225] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:53.062 [2024-12-05 12:59:35.542235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:53.062 [2024-12-05 12:59:35.572886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:53.062 BaseBdev1 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:53.062 [ 00:27:53.062 { 00:27:53.062 "name": "BaseBdev1", 00:27:53.062 "aliases": [ 00:27:53.062 "fc6943a6-4863-4699-9054-c7d914af85ba" 00:27:53.062 ], 00:27:53.062 "product_name": "Malloc disk", 00:27:53.062 "block_size": 4128, 00:27:53.062 "num_blocks": 8192, 00:27:53.062 "uuid": "fc6943a6-4863-4699-9054-c7d914af85ba", 00:27:53.062 "md_size": 32, 00:27:53.062 
"md_interleave": true, 00:27:53.062 "dif_type": 0, 00:27:53.062 "assigned_rate_limits": { 00:27:53.062 "rw_ios_per_sec": 0, 00:27:53.062 "rw_mbytes_per_sec": 0, 00:27:53.062 "r_mbytes_per_sec": 0, 00:27:53.062 "w_mbytes_per_sec": 0 00:27:53.062 }, 00:27:53.062 "claimed": true, 00:27:53.062 "claim_type": "exclusive_write", 00:27:53.062 "zoned": false, 00:27:53.062 "supported_io_types": { 00:27:53.062 "read": true, 00:27:53.062 "write": true, 00:27:53.062 "unmap": true, 00:27:53.062 "flush": true, 00:27:53.062 "reset": true, 00:27:53.062 "nvme_admin": false, 00:27:53.062 "nvme_io": false, 00:27:53.062 "nvme_io_md": false, 00:27:53.062 "write_zeroes": true, 00:27:53.062 "zcopy": true, 00:27:53.062 "get_zone_info": false, 00:27:53.062 "zone_management": false, 00:27:53.062 "zone_append": false, 00:27:53.062 "compare": false, 00:27:53.062 "compare_and_write": false, 00:27:53.062 "abort": true, 00:27:53.062 "seek_hole": false, 00:27:53.062 "seek_data": false, 00:27:53.062 "copy": true, 00:27:53.062 "nvme_iov_md": false 00:27:53.062 }, 00:27:53.062 "memory_domains": [ 00:27:53.062 { 00:27:53.062 "dma_device_id": "system", 00:27:53.062 "dma_device_type": 1 00:27:53.062 }, 00:27:53.062 { 00:27:53.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:53.062 "dma_device_type": 2 00:27:53.062 } 00:27:53.062 ], 00:27:53.062 "driver_specific": {} 00:27:53.062 } 00:27:53.062 ] 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:53.062 12:59:35 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:53.062 "name": "Existed_Raid", 00:27:53.062 "uuid": "309eb0d4-c36d-4879-b01b-7700e0e09424", 00:27:53.062 "strip_size_kb": 0, 00:27:53.062 "state": "configuring", 00:27:53.062 "raid_level": "raid1", 
00:27:53.062 "superblock": true, 00:27:53.062 "num_base_bdevs": 2, 00:27:53.062 "num_base_bdevs_discovered": 1, 00:27:53.062 "num_base_bdevs_operational": 2, 00:27:53.062 "base_bdevs_list": [ 00:27:53.062 { 00:27:53.062 "name": "BaseBdev1", 00:27:53.062 "uuid": "fc6943a6-4863-4699-9054-c7d914af85ba", 00:27:53.062 "is_configured": true, 00:27:53.062 "data_offset": 256, 00:27:53.062 "data_size": 7936 00:27:53.062 }, 00:27:53.062 { 00:27:53.062 "name": "BaseBdev2", 00:27:53.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:53.062 "is_configured": false, 00:27:53.062 "data_offset": 0, 00:27:53.062 "data_size": 0 00:27:53.062 } 00:27:53.062 ] 00:27:53.062 }' 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:53.062 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:53.625 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:53.625 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.625 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:53.625 [2024-12-05 12:59:35.953001] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:53.625 [2024-12-05 12:59:35.953046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:27:53.625 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.625 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:53.625 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:27:53.625 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:53.625 [2024-12-05 12:59:35.961046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:53.625 [2024-12-05 12:59:35.962635] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:53.626 [2024-12-05 12:59:35.962673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:53.626 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.626 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:53.626 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:53.626 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:53.626 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:53.626 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:53.626 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:53.626 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:53.626 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:53.626 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:53.626 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:53.626 
12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:53.626 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:53.626 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:53.626 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:53.626 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.626 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:53.626 12:59:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.626 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:53.626 "name": "Existed_Raid", 00:27:53.626 "uuid": "8ed47a6b-f670-4d31-8f10-89958a2baf55", 00:27:53.626 "strip_size_kb": 0, 00:27:53.626 "state": "configuring", 00:27:53.626 "raid_level": "raid1", 00:27:53.626 "superblock": true, 00:27:53.626 "num_base_bdevs": 2, 00:27:53.626 "num_base_bdevs_discovered": 1, 00:27:53.626 "num_base_bdevs_operational": 2, 00:27:53.626 "base_bdevs_list": [ 00:27:53.626 { 00:27:53.626 "name": "BaseBdev1", 00:27:53.626 "uuid": "fc6943a6-4863-4699-9054-c7d914af85ba", 00:27:53.626 "is_configured": true, 00:27:53.626 "data_offset": 256, 00:27:53.626 "data_size": 7936 00:27:53.626 }, 00:27:53.626 { 00:27:53.626 "name": "BaseBdev2", 00:27:53.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:53.626 "is_configured": false, 00:27:53.626 "data_offset": 0, 00:27:53.626 "data_size": 0 00:27:53.626 } 00:27:53.626 ] 00:27:53.626 }' 00:27:53.626 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:27:53.626 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:53.883 [2024-12-05 12:59:36.303446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:53.883 [2024-12-05 12:59:36.303635] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:53.883 [2024-12-05 12:59:36.303645] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:53.883 [2024-12-05 12:59:36.303710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:53.883 [2024-12-05 12:59:36.303773] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:53.883 [2024-12-05 12:59:36.303787] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:27:53.883 [2024-12-05 12:59:36.303835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:53.883 BaseBdev2 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:53.883 [ 00:27:53.883 { 00:27:53.883 "name": "BaseBdev2", 00:27:53.883 "aliases": [ 00:27:53.883 "70d817d4-8db2-4b58-bf1f-b7c2851c081a" 00:27:53.883 ], 00:27:53.883 "product_name": "Malloc disk", 00:27:53.883 "block_size": 4128, 00:27:53.883 "num_blocks": 8192, 00:27:53.883 "uuid": "70d817d4-8db2-4b58-bf1f-b7c2851c081a", 00:27:53.883 "md_size": 32, 00:27:53.883 "md_interleave": true, 00:27:53.883 "dif_type": 0, 00:27:53.883 "assigned_rate_limits": { 00:27:53.883 "rw_ios_per_sec": 0, 00:27:53.883 "rw_mbytes_per_sec": 0, 00:27:53.883 "r_mbytes_per_sec": 0, 00:27:53.883 "w_mbytes_per_sec": 0 00:27:53.883 }, 00:27:53.883 "claimed": true, 00:27:53.883 "claim_type": "exclusive_write", 
00:27:53.883 "zoned": false, 00:27:53.883 "supported_io_types": { 00:27:53.883 "read": true, 00:27:53.883 "write": true, 00:27:53.883 "unmap": true, 00:27:53.883 "flush": true, 00:27:53.883 "reset": true, 00:27:53.883 "nvme_admin": false, 00:27:53.883 "nvme_io": false, 00:27:53.883 "nvme_io_md": false, 00:27:53.883 "write_zeroes": true, 00:27:53.883 "zcopy": true, 00:27:53.883 "get_zone_info": false, 00:27:53.883 "zone_management": false, 00:27:53.883 "zone_append": false, 00:27:53.883 "compare": false, 00:27:53.883 "compare_and_write": false, 00:27:53.883 "abort": true, 00:27:53.883 "seek_hole": false, 00:27:53.883 "seek_data": false, 00:27:53.883 "copy": true, 00:27:53.883 "nvme_iov_md": false 00:27:53.883 }, 00:27:53.883 "memory_domains": [ 00:27:53.883 { 00:27:53.883 "dma_device_id": "system", 00:27:53.883 "dma_device_type": 1 00:27:53.883 }, 00:27:53.883 { 00:27:53.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:53.883 "dma_device_type": 2 00:27:53.883 } 00:27:53.883 ], 00:27:53.883 "driver_specific": {} 00:27:53.883 } 00:27:53.883 ] 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:53.883 
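Side note on the geometry reported above: the BaseBdev2 dump shows block_size 4128 and the earlier Existed_Raid dumps show data_size 7936 with data_offset 256. A minimal Python sketch of the arithmetic, using only values taken from this log (the interpretation of data_offset as superblock reservation is an assumption, not stated in the log):

```python
# Values from "rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2" above:
logical_block = 4096   # logical block size argument
md_per_block = 32      # -m 32 with -i: 32 bytes of metadata interleaved per block
block_size = logical_block + md_per_block
assert block_size == 4128          # matches "block_size": 4128 in the dump

num_blocks = 8192                  # 32 MiB volume / 4096-byte logical blocks
data_offset = 256                  # assumption: blocks reserved for the raid superblock (-s)
data_size = num_blocks - data_offset
assert data_size == 7936           # matches "data_size": 7936 in the dump
print(block_size, data_size)
```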
12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.883 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:53.883 "name": "Existed_Raid", 00:27:53.883 "uuid": "8ed47a6b-f670-4d31-8f10-89958a2baf55", 00:27:53.883 "strip_size_kb": 0, 00:27:53.883 "state": "online", 00:27:53.883 "raid_level": "raid1", 00:27:53.883 "superblock": true, 00:27:53.883 "num_base_bdevs": 2, 00:27:53.883 "num_base_bdevs_discovered": 2, 00:27:53.883 
"num_base_bdevs_operational": 2, 00:27:53.883 "base_bdevs_list": [ 00:27:53.883 { 00:27:53.883 "name": "BaseBdev1", 00:27:53.883 "uuid": "fc6943a6-4863-4699-9054-c7d914af85ba", 00:27:53.883 "is_configured": true, 00:27:53.884 "data_offset": 256, 00:27:53.884 "data_size": 7936 00:27:53.884 }, 00:27:53.884 { 00:27:53.884 "name": "BaseBdev2", 00:27:53.884 "uuid": "70d817d4-8db2-4b58-bf1f-b7c2851c081a", 00:27:53.884 "is_configured": true, 00:27:53.884 "data_offset": 256, 00:27:53.884 "data_size": 7936 00:27:53.884 } 00:27:53.884 ] 00:27:53.884 }' 00:27:53.884 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:53.884 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:54.140 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:54.140 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:54.140 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:54.140 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:54.140 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:27:54.140 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:54.140 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:54.140 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.140 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:54.140 12:59:36 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:54.140 [2024-12-05 12:59:36.667837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:54.140 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.140 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:54.140 "name": "Existed_Raid", 00:27:54.140 "aliases": [ 00:27:54.140 "8ed47a6b-f670-4d31-8f10-89958a2baf55" 00:27:54.140 ], 00:27:54.140 "product_name": "Raid Volume", 00:27:54.140 "block_size": 4128, 00:27:54.140 "num_blocks": 7936, 00:27:54.140 "uuid": "8ed47a6b-f670-4d31-8f10-89958a2baf55", 00:27:54.140 "md_size": 32, 00:27:54.140 "md_interleave": true, 00:27:54.140 "dif_type": 0, 00:27:54.140 "assigned_rate_limits": { 00:27:54.140 "rw_ios_per_sec": 0, 00:27:54.140 "rw_mbytes_per_sec": 0, 00:27:54.140 "r_mbytes_per_sec": 0, 00:27:54.140 "w_mbytes_per_sec": 0 00:27:54.140 }, 00:27:54.140 "claimed": false, 00:27:54.140 "zoned": false, 00:27:54.140 "supported_io_types": { 00:27:54.140 "read": true, 00:27:54.140 "write": true, 00:27:54.140 "unmap": false, 00:27:54.140 "flush": false, 00:27:54.140 "reset": true, 00:27:54.140 "nvme_admin": false, 00:27:54.140 "nvme_io": false, 00:27:54.140 "nvme_io_md": false, 00:27:54.140 "write_zeroes": true, 00:27:54.140 "zcopy": false, 00:27:54.140 "get_zone_info": false, 00:27:54.140 "zone_management": false, 00:27:54.140 "zone_append": false, 00:27:54.140 "compare": false, 00:27:54.140 "compare_and_write": false, 00:27:54.140 "abort": false, 00:27:54.140 "seek_hole": false, 00:27:54.140 "seek_data": false, 00:27:54.140 "copy": false, 00:27:54.140 "nvme_iov_md": false 00:27:54.140 }, 00:27:54.140 "memory_domains": [ 00:27:54.140 { 00:27:54.140 "dma_device_id": "system", 00:27:54.140 "dma_device_type": 1 00:27:54.140 }, 00:27:54.140 { 00:27:54.140 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:27:54.140 "dma_device_type": 2 00:27:54.140 }, 00:27:54.140 { 00:27:54.140 "dma_device_id": "system", 00:27:54.140 "dma_device_type": 1 00:27:54.140 }, 00:27:54.140 { 00:27:54.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:54.140 "dma_device_type": 2 00:27:54.140 } 00:27:54.140 ], 00:27:54.140 "driver_specific": { 00:27:54.140 "raid": { 00:27:54.140 "uuid": "8ed47a6b-f670-4d31-8f10-89958a2baf55", 00:27:54.140 "strip_size_kb": 0, 00:27:54.140 "state": "online", 00:27:54.140 "raid_level": "raid1", 00:27:54.140 "superblock": true, 00:27:54.140 "num_base_bdevs": 2, 00:27:54.140 "num_base_bdevs_discovered": 2, 00:27:54.140 "num_base_bdevs_operational": 2, 00:27:54.140 "base_bdevs_list": [ 00:27:54.140 { 00:27:54.140 "name": "BaseBdev1", 00:27:54.140 "uuid": "fc6943a6-4863-4699-9054-c7d914af85ba", 00:27:54.140 "is_configured": true, 00:27:54.140 "data_offset": 256, 00:27:54.140 "data_size": 7936 00:27:54.140 }, 00:27:54.140 { 00:27:54.140 "name": "BaseBdev2", 00:27:54.140 "uuid": "70d817d4-8db2-4b58-bf1f-b7c2851c081a", 00:27:54.140 "is_configured": true, 00:27:54.140 "data_offset": 256, 00:27:54.140 "data_size": 7936 00:27:54.140 } 00:27:54.140 ] 00:27:54.140 } 00:27:54.140 } 00:27:54.140 }' 00:27:54.140 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:54.409 BaseBdev2' 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:27:54.409 
12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:54.409 [2024-12-05 12:59:36.831649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:54.409 12:59:36 
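The verify_raid_bdev_properties checks just completed above reduce to two jq filters over the bdev JSON. A small Python re-expression of that logic, operating on an abridged copy of the Existed_Raid dump captured earlier in this log (a sketch for readability, not part of the test scripts; only fields the filters touch are kept):

```python
import json

# Abridged from the Existed_Raid dump in this log.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "block_size": 4128,
  "md_size": 32,
  "md_interleave": true,
  "dif_type": 0,
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true}
      ]
    }
  }
}
""")

# jq: '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
base_bdev_names = [b["name"]
                   for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
                   if b["is_configured"]]

# jq: '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# jq prints booleans as "true"/"false", so mirror that rendering.
cmp_raid_bdev = " ".join(
    str(raid_bdev_info[k]).lower() if isinstance(raid_bdev_info[k], bool)
    else str(raid_bdev_info[k])
    for k in ("block_size", "md_size", "md_interleave", "dif_type"))

print(base_bdev_names)   # ['BaseBdev1', 'BaseBdev2']
print(cmp_raid_bdev)     # 4128 32 true 0
```

The test then asserts each base bdev's tuple equals the raid bdev's tuple, which is the "[[ 4128 32 true 0 == ... ]]" comparison seen above.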
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:54.409 "name": "Existed_Raid", 00:27:54.409 "uuid": "8ed47a6b-f670-4d31-8f10-89958a2baf55", 00:27:54.409 "strip_size_kb": 0, 00:27:54.409 "state": "online", 00:27:54.409 "raid_level": "raid1", 00:27:54.409 "superblock": true, 00:27:54.409 "num_base_bdevs": 2, 00:27:54.409 "num_base_bdevs_discovered": 1, 00:27:54.409 "num_base_bdevs_operational": 1, 00:27:54.409 "base_bdevs_list": [ 00:27:54.409 { 00:27:54.409 "name": null, 00:27:54.409 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:27:54.409 "is_configured": false, 00:27:54.409 "data_offset": 0, 00:27:54.409 "data_size": 7936 00:27:54.409 }, 00:27:54.409 { 00:27:54.409 "name": "BaseBdev2", 00:27:54.409 "uuid": "70d817d4-8db2-4b58-bf1f-b7c2851c081a", 00:27:54.409 "is_configured": true, 00:27:54.409 "data_offset": 256, 00:27:54.409 "data_size": 7936 00:27:54.409 } 00:27:54.409 ] 00:27:54.409 }' 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:54.409 12:59:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:27:54.718 12:59:37 
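The state check just performed (verify_raid_bdev_state Existed_Raid online raid1 0 1, after removing BaseBdev1) amounts to comparing a handful of fields from the JSON dumped above. A minimal Python sketch of that comparison, with field values copied from the log (the helper name and exact set of compared fields are an illustrative assumption):

```python
import json

# Abridged post-removal Existed_Raid dump from this log.
info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1
}
""")

def verify_state(info, expected_state, raid_level, strip_size, operational):
    # Hypothetical mirror of the shell checks in verify_raid_bdev_state.
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == operational)

print(verify_state(info, "online", "raid1", 0, 1))  # True: raid1 stays online
                                                    # with one of two mirrors left
```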
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:54.718 [2024-12-05 12:59:37.223405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:54.718 [2024-12-05 12:59:37.223509] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:54.718 [2024-12-05 12:59:37.271175] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:54.718 [2024-12-05 12:59:37.271225] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:54.718 [2024-12-05 12:59:37.271234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:27:54.718 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 85647 00:27:54.974 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 85647 ']' 00:27:54.974 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 85647 00:27:54.974 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:27:54.974 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:54.974 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85647 00:27:54.974 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:54.974 killing process with pid 85647 00:27:54.974 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:54.974 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85647' 00:27:54.974 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 85647 00:27:54.974 [2024-12-05 12:59:37.328133] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:54.974 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 85647 00:27:54.974 [2024-12-05 12:59:37.336629] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:55.538 
12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:27:55.538 00:27:55.538 real 0m3.587s 00:27:55.538 user 0m5.261s 00:27:55.538 sys 0m0.560s 00:27:55.538 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:55.538 ************************************ 00:27:55.538 END TEST raid_state_function_test_sb_md_interleaved 00:27:55.538 ************************************ 00:27:55.538 12:59:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:55.538 12:59:37 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:27:55.538 12:59:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:55.538 12:59:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:55.538 12:59:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:55.538 ************************************ 00:27:55.538 START TEST raid_superblock_test_md_interleaved 00:27:55.538 ************************************ 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=85881 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 85881 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 85881 ']' 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:55.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:55.538 12:59:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:27:55.538 [2024-12-05 12:59:38.031811] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:27:55.538 [2024-12-05 12:59:38.031943] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85881 ] 00:27:55.795 [2024-12-05 12:59:38.183688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.795 [2024-12-05 12:59:38.285735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.053 [2024-12-05 12:59:38.424009] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:56.053 [2024-12-05 12:59:38.424070] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:56.617 12:59:38 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:56.617 malloc1 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:56.617 [2024-12-05 12:59:38.971426] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:56.617 [2024-12-05 12:59:38.971484] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:56.617 [2024-12-05 12:59:38.971516] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:56.617 [2024-12-05 12:59:38.971526] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:56.617 [2024-12-05 12:59:38.973423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:56.617 [2024-12-05 12:59:38.973461] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:56.617 pt1 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.617 12:59:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:56.617 malloc2 
00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:56.617 [2024-12-05 12:59:39.015655] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:56.617 [2024-12-05 12:59:39.015714] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:56.617 [2024-12-05 12:59:39.015735] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:56.617 [2024-12-05 12:59:39.015746] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:56.617 [2024-12-05 12:59:39.017614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:56.617 [2024-12-05 12:59:39.017647] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:56.617 pt2 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:27:56.617 [2024-12-05 12:59:39.023682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:56.617 [2024-12-05 12:59:39.025677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:56.617 [2024-12-05 12:59:39.025935] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:56.617 [2024-12-05 12:59:39.026010] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:56.617 [2024-12-05 12:59:39.026111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:56.617 [2024-12-05 12:59:39.026304] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:56.617 [2024-12-05 12:59:39.026361] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:56.617 [2024-12-05 12:59:39.026506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:56.617 12:59:39 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.617 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:56.618 "name": "raid_bdev1", 00:27:56.618 "uuid": "59cce37d-8689-4c24-812c-828866c6c6fa", 00:27:56.618 "strip_size_kb": 0, 00:27:56.618 "state": "online", 00:27:56.618 "raid_level": "raid1", 00:27:56.618 "superblock": true, 00:27:56.618 "num_base_bdevs": 2, 00:27:56.618 "num_base_bdevs_discovered": 2, 00:27:56.618 "num_base_bdevs_operational": 2, 00:27:56.618 "base_bdevs_list": [ 00:27:56.618 { 00:27:56.618 "name": "pt1", 00:27:56.618 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:56.618 "is_configured": true, 00:27:56.618 "data_offset": 256, 00:27:56.618 "data_size": 7936 00:27:56.618 }, 00:27:56.618 { 00:27:56.618 "name": "pt2", 00:27:56.618 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:56.618 "is_configured": true, 00:27:56.618 "data_offset": 256, 00:27:56.618 
"data_size": 7936 00:27:56.618 } 00:27:56.618 ] 00:27:56.618 }' 00:27:56.618 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:56.618 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:56.875 [2024-12-05 12:59:39.348058] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:56.875 "name": "raid_bdev1", 00:27:56.875 "aliases": [ 00:27:56.875 "59cce37d-8689-4c24-812c-828866c6c6fa" 00:27:56.875 ], 
00:27:56.875 "product_name": "Raid Volume", 00:27:56.875 "block_size": 4128, 00:27:56.875 "num_blocks": 7936, 00:27:56.875 "uuid": "59cce37d-8689-4c24-812c-828866c6c6fa", 00:27:56.875 "md_size": 32, 00:27:56.875 "md_interleave": true, 00:27:56.875 "dif_type": 0, 00:27:56.875 "assigned_rate_limits": { 00:27:56.875 "rw_ios_per_sec": 0, 00:27:56.875 "rw_mbytes_per_sec": 0, 00:27:56.875 "r_mbytes_per_sec": 0, 00:27:56.875 "w_mbytes_per_sec": 0 00:27:56.875 }, 00:27:56.875 "claimed": false, 00:27:56.875 "zoned": false, 00:27:56.875 "supported_io_types": { 00:27:56.875 "read": true, 00:27:56.875 "write": true, 00:27:56.875 "unmap": false, 00:27:56.875 "flush": false, 00:27:56.875 "reset": true, 00:27:56.875 "nvme_admin": false, 00:27:56.875 "nvme_io": false, 00:27:56.875 "nvme_io_md": false, 00:27:56.875 "write_zeroes": true, 00:27:56.875 "zcopy": false, 00:27:56.875 "get_zone_info": false, 00:27:56.875 "zone_management": false, 00:27:56.875 "zone_append": false, 00:27:56.875 "compare": false, 00:27:56.875 "compare_and_write": false, 00:27:56.875 "abort": false, 00:27:56.875 "seek_hole": false, 00:27:56.875 "seek_data": false, 00:27:56.875 "copy": false, 00:27:56.875 "nvme_iov_md": false 00:27:56.875 }, 00:27:56.875 "memory_domains": [ 00:27:56.875 { 00:27:56.875 "dma_device_id": "system", 00:27:56.875 "dma_device_type": 1 00:27:56.875 }, 00:27:56.875 { 00:27:56.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:56.875 "dma_device_type": 2 00:27:56.875 }, 00:27:56.875 { 00:27:56.875 "dma_device_id": "system", 00:27:56.875 "dma_device_type": 1 00:27:56.875 }, 00:27:56.875 { 00:27:56.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:56.875 "dma_device_type": 2 00:27:56.875 } 00:27:56.875 ], 00:27:56.875 "driver_specific": { 00:27:56.875 "raid": { 00:27:56.875 "uuid": "59cce37d-8689-4c24-812c-828866c6c6fa", 00:27:56.875 "strip_size_kb": 0, 00:27:56.875 "state": "online", 00:27:56.875 "raid_level": "raid1", 00:27:56.875 "superblock": true, 00:27:56.875 "num_base_bdevs": 
2, 00:27:56.875 "num_base_bdevs_discovered": 2, 00:27:56.875 "num_base_bdevs_operational": 2, 00:27:56.875 "base_bdevs_list": [ 00:27:56.875 { 00:27:56.875 "name": "pt1", 00:27:56.875 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:56.875 "is_configured": true, 00:27:56.875 "data_offset": 256, 00:27:56.875 "data_size": 7936 00:27:56.875 }, 00:27:56.875 { 00:27:56.875 "name": "pt2", 00:27:56.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:56.875 "is_configured": true, 00:27:56.875 "data_offset": 256, 00:27:56.875 "data_size": 7936 00:27:56.875 } 00:27:56.875 ] 00:27:56.875 } 00:27:56.875 } 00:27:56.875 }' 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:56.875 pt2' 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:56.875 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:27:57.134 [2024-12-05 12:59:39.512055] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=59cce37d-8689-4c24-812c-828866c6c6fa 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 59cce37d-8689-4c24-812c-828866c6c6fa ']' 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:57.134 [2024-12-05 12:59:39.547771] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:57.134 [2024-12-05 12:59:39.547889] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:57.134 [2024-12-05 12:59:39.547980] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:57.134 [2024-12-05 12:59:39.548041] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:57.134 [2024-12-05 12:59:39.548053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:27:57.134 12:59:39 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:57.134 [2024-12-05 12:59:39.643810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:57.134 [2024-12-05 12:59:39.645724] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:57.134 [2024-12-05 12:59:39.645895] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:57.134 [2024-12-05 12:59:39.645953] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:57.134 [2024-12-05 12:59:39.645968] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:57.134 [2024-12-05 12:59:39.645978] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:27:57.134 request: 00:27:57.134 { 00:27:57.134 "name": "raid_bdev1", 00:27:57.134 "raid_level": "raid1", 00:27:57.134 "base_bdevs": [ 00:27:57.134 "malloc1", 00:27:57.134 "malloc2" 00:27:57.134 ], 00:27:57.134 "superblock": false, 00:27:57.134 "method": "bdev_raid_create", 00:27:57.134 "req_id": 1 00:27:57.134 } 00:27:57.134 Got JSON-RPC error response 00:27:57.134 response: 00:27:57.134 { 00:27:57.134 "code": -17, 00:27:57.134 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:57.134 } 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.134 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:57.134 [2024-12-05 12:59:39.687802] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:57.134 [2024-12-05 12:59:39.687847] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:57.134 [2024-12-05 12:59:39.687862] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:57.135 [2024-12-05 12:59:39.687872] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:57.135 [2024-12-05 12:59:39.689775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:57.135 [2024-12-05 12:59:39.689810] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:57.135 [2024-12-05 12:59:39.689856] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:57.135 [2024-12-05 12:59:39.689908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:27:57.135 pt1 00:27:57.135 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.135 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:27:57.135 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:57.135 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:57.135 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:57.135 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:57.135 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:57.135 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:57.135 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:57.135 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:57.135 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:57.135 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:57.135 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.135 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:57.135 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.135 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.392 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:57.392 "name": "raid_bdev1", 00:27:57.392 "uuid": "59cce37d-8689-4c24-812c-828866c6c6fa", 00:27:57.392 "strip_size_kb": 0, 00:27:57.392 "state": "configuring", 00:27:57.392 "raid_level": "raid1", 00:27:57.392 "superblock": true, 00:27:57.392 "num_base_bdevs": 2, 00:27:57.392 "num_base_bdevs_discovered": 1, 00:27:57.392 "num_base_bdevs_operational": 2, 00:27:57.392 "base_bdevs_list": [ 00:27:57.392 { 00:27:57.392 "name": "pt1", 00:27:57.392 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:57.392 "is_configured": true, 00:27:57.392 "data_offset": 256, 00:27:57.392 "data_size": 7936 00:27:57.392 }, 00:27:57.392 { 00:27:57.392 "name": null, 00:27:57.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:57.392 "is_configured": false, 00:27:57.392 "data_offset": 256, 00:27:57.392 "data_size": 7936 00:27:57.392 } 00:27:57.392 ] 00:27:57.392 }' 00:27:57.392 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:57.392 12:59:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:27:57.650 [2024-12-05 12:59:40.011902] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:57.650 [2024-12-05 12:59:40.011970] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:57.650 [2024-12-05 12:59:40.011990] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:57.650 [2024-12-05 12:59:40.012001] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:57.650 [2024-12-05 12:59:40.012157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:57.650 [2024-12-05 12:59:40.012175] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:57.650 [2024-12-05 12:59:40.012220] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:57.650 [2024-12-05 12:59:40.012242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:57.650 [2024-12-05 12:59:40.012324] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:57.650 [2024-12-05 12:59:40.012335] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:57.650 [2024-12-05 12:59:40.012399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:57.650 [2024-12-05 12:59:40.012457] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:57.650 [2024-12-05 12:59:40.012465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:27:57.650 [2024-12-05 12:59:40.012538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:57.650 pt2 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:57.650 "name": "raid_bdev1", 00:27:57.650 "uuid": "59cce37d-8689-4c24-812c-828866c6c6fa", 00:27:57.650 "strip_size_kb": 0, 00:27:57.650 "state": "online", 00:27:57.650 "raid_level": "raid1", 00:27:57.650 "superblock": true, 00:27:57.650 "num_base_bdevs": 2, 00:27:57.650 "num_base_bdevs_discovered": 2, 00:27:57.650 "num_base_bdevs_operational": 2, 00:27:57.650 "base_bdevs_list": [ 00:27:57.650 { 00:27:57.650 "name": "pt1", 00:27:57.650 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:57.650 "is_configured": true, 00:27:57.650 "data_offset": 256, 00:27:57.650 "data_size": 7936 00:27:57.650 }, 00:27:57.650 { 00:27:57.650 "name": "pt2", 00:27:57.650 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:57.650 "is_configured": true, 00:27:57.650 "data_offset": 256, 00:27:57.650 "data_size": 7936 00:27:57.650 } 00:27:57.650 ] 00:27:57.650 }' 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:57.650 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 
00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:57.909 [2024-12-05 12:59:40.340257] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:57.909 "name": "raid_bdev1", 00:27:57.909 "aliases": [ 00:27:57.909 "59cce37d-8689-4c24-812c-828866c6c6fa" 00:27:57.909 ], 00:27:57.909 "product_name": "Raid Volume", 00:27:57.909 "block_size": 4128, 00:27:57.909 "num_blocks": 7936, 00:27:57.909 "uuid": "59cce37d-8689-4c24-812c-828866c6c6fa", 00:27:57.909 "md_size": 32, 00:27:57.909 "md_interleave": true, 00:27:57.909 "dif_type": 0, 00:27:57.909 "assigned_rate_limits": { 00:27:57.909 "rw_ios_per_sec": 0, 00:27:57.909 "rw_mbytes_per_sec": 0, 00:27:57.909 "r_mbytes_per_sec": 0, 00:27:57.909 "w_mbytes_per_sec": 0 00:27:57.909 }, 00:27:57.909 "claimed": false, 00:27:57.909 "zoned": false, 00:27:57.909 "supported_io_types": { 00:27:57.909 "read": true, 00:27:57.909 "write": true, 00:27:57.909 "unmap": false, 00:27:57.909 "flush": false, 00:27:57.909 "reset": true, 00:27:57.909 "nvme_admin": false, 00:27:57.909 "nvme_io": false, 00:27:57.909 "nvme_io_md": false, 00:27:57.909 "write_zeroes": true, 00:27:57.909 "zcopy": false, 00:27:57.909 "get_zone_info": false, 00:27:57.909 "zone_management": false, 00:27:57.909 "zone_append": false, 00:27:57.909 "compare": false, 00:27:57.909 "compare_and_write": 
false, 00:27:57.909 "abort": false, 00:27:57.909 "seek_hole": false, 00:27:57.909 "seek_data": false, 00:27:57.909 "copy": false, 00:27:57.909 "nvme_iov_md": false 00:27:57.909 }, 00:27:57.909 "memory_domains": [ 00:27:57.909 { 00:27:57.909 "dma_device_id": "system", 00:27:57.909 "dma_device_type": 1 00:27:57.909 }, 00:27:57.909 { 00:27:57.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:57.909 "dma_device_type": 2 00:27:57.909 }, 00:27:57.909 { 00:27:57.909 "dma_device_id": "system", 00:27:57.909 "dma_device_type": 1 00:27:57.909 }, 00:27:57.909 { 00:27:57.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:57.909 "dma_device_type": 2 00:27:57.909 } 00:27:57.909 ], 00:27:57.909 "driver_specific": { 00:27:57.909 "raid": { 00:27:57.909 "uuid": "59cce37d-8689-4c24-812c-828866c6c6fa", 00:27:57.909 "strip_size_kb": 0, 00:27:57.909 "state": "online", 00:27:57.909 "raid_level": "raid1", 00:27:57.909 "superblock": true, 00:27:57.909 "num_base_bdevs": 2, 00:27:57.909 "num_base_bdevs_discovered": 2, 00:27:57.909 "num_base_bdevs_operational": 2, 00:27:57.909 "base_bdevs_list": [ 00:27:57.909 { 00:27:57.909 "name": "pt1", 00:27:57.909 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:57.909 "is_configured": true, 00:27:57.909 "data_offset": 256, 00:27:57.909 "data_size": 7936 00:27:57.909 }, 00:27:57.909 { 00:27:57.909 "name": "pt2", 00:27:57.909 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:57.909 "is_configured": true, 00:27:57.909 "data_offset": 256, 00:27:57.909 "data_size": 7936 00:27:57.909 } 00:27:57.909 ] 00:27:57.909 } 00:27:57.909 } 00:27:57.909 }' 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:57.909 pt2' 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:57.909 12:59:40 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:57.909 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:27:57.909 [2024-12-05 12:59:40.492290] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 59cce37d-8689-4c24-812c-828866c6c6fa '!=' 59cce37d-8689-4c24-812c-828866c6c6fa ']' 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:58.172 
[2024-12-05 12:59:40.524043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:58.172 12:59:40 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:58.172 "name": "raid_bdev1", 00:27:58.172 "uuid": "59cce37d-8689-4c24-812c-828866c6c6fa", 00:27:58.172 "strip_size_kb": 0, 00:27:58.172 "state": "online", 00:27:58.172 "raid_level": "raid1", 00:27:58.172 "superblock": true, 00:27:58.172 "num_base_bdevs": 2, 00:27:58.172 "num_base_bdevs_discovered": 1, 00:27:58.172 "num_base_bdevs_operational": 1, 00:27:58.172 "base_bdevs_list": [ 00:27:58.172 { 00:27:58.172 "name": null, 00:27:58.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.172 "is_configured": false, 00:27:58.172 "data_offset": 0, 00:27:58.172 "data_size": 7936 00:27:58.172 }, 00:27:58.172 { 00:27:58.172 "name": "pt2", 00:27:58.172 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:58.172 "is_configured": true, 00:27:58.172 "data_offset": 256, 00:27:58.172 "data_size": 7936 00:27:58.172 } 00:27:58.172 ] 00:27:58.172 }' 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:58.172 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:58.430 [2024-12-05 12:59:40.844079] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:58.430 [2024-12-05 12:59:40.844102] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:58.430 [2024-12-05 12:59:40.844165] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:27:58.430 [2024-12-05 12:59:40.844208] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:58.430 [2024-12-05 12:59:40.844219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:58.430 [2024-12-05 12:59:40.896096] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:58.430 [2024-12-05 12:59:40.896146] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:58.430 [2024-12-05 12:59:40.896161] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:58.430 [2024-12-05 12:59:40.896171] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:58.430 [2024-12-05 12:59:40.898630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:58.430 [2024-12-05 12:59:40.898675] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:58.430 [2024-12-05 12:59:40.898731] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:58.430 [2024-12-05 12:59:40.898777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:58.430 [2024-12-05 12:59:40.898843] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:58.430 [2024-12-05 12:59:40.898855] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:58.430 [2024-12-05 12:59:40.898942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:58.430 [2024-12-05 12:59:40.899002] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:58.430 [2024-12-05 12:59:40.899010] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:27:58.430 [2024-12-05 12:59:40.899073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:58.430 pt2 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:58.430 "name": "raid_bdev1", 00:27:58.430 "uuid": "59cce37d-8689-4c24-812c-828866c6c6fa", 00:27:58.430 "strip_size_kb": 0, 00:27:58.430 "state": "online", 00:27:58.430 "raid_level": "raid1", 00:27:58.430 "superblock": true, 00:27:58.430 "num_base_bdevs": 2, 00:27:58.430 "num_base_bdevs_discovered": 1, 00:27:58.430 "num_base_bdevs_operational": 1, 00:27:58.430 "base_bdevs_list": [ 00:27:58.430 { 00:27:58.430 "name": null, 00:27:58.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.430 "is_configured": false, 00:27:58.430 "data_offset": 256, 00:27:58.430 "data_size": 7936 00:27:58.430 }, 00:27:58.430 { 00:27:58.430 "name": "pt2", 00:27:58.430 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:58.430 "is_configured": true, 00:27:58.430 "data_offset": 256, 00:27:58.430 "data_size": 7936 00:27:58.430 } 00:27:58.430 ] 00:27:58.430 }' 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:58.430 12:59:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:58.689 12:59:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:58.689 [2024-12-05 12:59:41.204164] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:58.689 [2024-12-05 12:59:41.204309] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:58.689 [2024-12-05 12:59:41.204386] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:58.689 [2024-12-05 12:59:41.204439] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:58.689 [2024-12-05 12:59:41.204448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:27:58.689 12:59:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:58.689 [2024-12-05 12:59:41.244195] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:58.689 [2024-12-05 12:59:41.244247] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:58.689 [2024-12-05 12:59:41.244265] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:27:58.689 [2024-12-05 12:59:41.244273] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:58.689 [2024-12-05 12:59:41.246220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:58.689 [2024-12-05 12:59:41.246255] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:58.689 [2024-12-05 12:59:41.246304] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:58.689 [2024-12-05 12:59:41.246344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:58.689 [2024-12-05 12:59:41.246432] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:58.689 [2024-12-05 12:59:41.246442] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:58.689 [2024-12-05 12:59:41.246458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:27:58.689 [2024-12-05 12:59:41.246523] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:58.689 [2024-12-05 12:59:41.246591] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:27:58.689 [2024-12-05 12:59:41.246600] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:27:58.689 [2024-12-05 12:59:41.246662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:27:58.689 [2024-12-05 12:59:41.246763] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:27:58.689 [2024-12-05 12:59:41.246775] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:27:58.689 [2024-12-05 12:59:41.246844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:58.689 pt1 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:58.689 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.947 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:58.947 "name": "raid_bdev1", 00:27:58.947 "uuid": "59cce37d-8689-4c24-812c-828866c6c6fa", 00:27:58.947 "strip_size_kb": 0, 00:27:58.947 "state": "online", 00:27:58.947 "raid_level": "raid1", 00:27:58.947 "superblock": true, 00:27:58.947 "num_base_bdevs": 2, 00:27:58.947 "num_base_bdevs_discovered": 1, 00:27:58.947 "num_base_bdevs_operational": 1, 00:27:58.947 "base_bdevs_list": [ 00:27:58.947 { 00:27:58.947 "name": null, 00:27:58.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.947 "is_configured": false, 00:27:58.947 "data_offset": 256, 00:27:58.947 "data_size": 7936 00:27:58.947 }, 00:27:58.947 { 00:27:58.947 "name": "pt2", 00:27:58.947 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:58.947 "is_configured": true, 00:27:58.947 "data_offset": 256, 00:27:58.947 "data_size": 7936 00:27:58.947 } 00:27:58.947 ] 00:27:58.947 }' 00:27:58.947 12:59:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:58.947 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:59.206 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:27:59.206 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:59.206 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.206 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:59.206 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.206 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:27:59.206 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:59.206 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:27:59.206 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.206 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:59.207 [2024-12-05 12:59:41.584511] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:59.207 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.207 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 59cce37d-8689-4c24-812c-828866c6c6fa '!=' 59cce37d-8689-4c24-812c-828866c6c6fa ']' 00:27:59.207 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 85881 00:27:59.207 
12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 85881 ']' 00:27:59.207 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 85881 00:27:59.207 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:27:59.207 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:59.207 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85881 00:27:59.207 killing process with pid 85881 00:27:59.207 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:59.207 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:59.207 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85881' 00:27:59.207 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 85881 00:27:59.207 [2024-12-05 12:59:41.631878] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:59.207 12:59:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 85881 00:27:59.207 [2024-12-05 12:59:41.631949] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:59.207 [2024-12-05 12:59:41.631996] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:59.207 [2024-12-05 12:59:41.632009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:27:59.207 [2024-12-05 12:59:41.761086] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:00.141 12:59:42 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@565 -- # return 0 00:28:00.141 00:28:00.141 real 0m4.508s 00:28:00.141 user 0m6.858s 00:28:00.141 sys 0m0.690s 00:28:00.141 12:59:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:00.141 12:59:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:00.141 ************************************ 00:28:00.141 END TEST raid_superblock_test_md_interleaved 00:28:00.141 ************************************ 00:28:00.141 12:59:42 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:28:00.141 12:59:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:28:00.141 12:59:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:00.141 12:59:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:00.141 ************************************ 00:28:00.141 START TEST raid_rebuild_test_sb_md_interleaved 00:28:00.141 ************************************ 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:28:00.141 12:59:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 
00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:28:00.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=86199 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 86199 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 86199 ']' 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:00.141 12:59:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:00.141 [2024-12-05 12:59:42.588666] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:28:00.141 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:00.141 Zero copy mechanism will not be used. 
00:28:00.142 [2024-12-05 12:59:42.589027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86199 ] 00:28:00.400 [2024-12-05 12:59:42.753233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.400 [2024-12-05 12:59:42.852371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.729 [2024-12-05 12:59:42.987749] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:00.729 [2024-12-05 12:59:42.987794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:00.987 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:00.987 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:28:00.987 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:00.987 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:28:00.987 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.987 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:00.987 BaseBdev1_malloc 00:28:00.987 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.987 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:00.987 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.987 12:59:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:00.987 [2024-12-05 12:59:43.549380] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:00.987 [2024-12-05 12:59:43.549440] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:00.987 [2024-12-05 12:59:43.549462] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:00.987 [2024-12-05 12:59:43.549473] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:00.987 [2024-12-05 12:59:43.551348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:00.987 [2024-12-05 12:59:43.551386] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:00.987 BaseBdev1 00:28:00.987 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.987 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:00.987 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:28:00.987 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.987 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:01.247 BaseBdev2_malloc 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:28:01.247 [2024-12-05 12:59:43.589668] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:01.247 [2024-12-05 12:59:43.589726] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:01.247 [2024-12-05 12:59:43.589744] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:01.247 [2024-12-05 12:59:43.589756] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:01.247 [2024-12-05 12:59:43.591616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:01.247 [2024-12-05 12:59:43.591650] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:01.247 BaseBdev2 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:01.247 spare_malloc 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:01.247 spare_delay 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:01.247 [2024-12-05 12:59:43.646029] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:01.247 [2024-12-05 12:59:43.646088] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:01.247 [2024-12-05 12:59:43.646107] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:28:01.247 [2024-12-05 12:59:43.646118] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:01.247 [2024-12-05 12:59:43.648004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:01.247 [2024-12-05 12:59:43.648040] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:01.247 spare 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:01.247 [2024-12-05 12:59:43.654070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:01.247 [2024-12-05 12:59:43.655884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:01.247 [2024-12-05 
12:59:43.656067] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:01.247 [2024-12-05 12:59:43.656081] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:01.247 [2024-12-05 12:59:43.656154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:01.247 [2024-12-05 12:59:43.656239] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:01.247 [2024-12-05 12:59:43.656247] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:01.247 [2024-12-05 12:59:43.656314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:01.247 "name": "raid_bdev1", 00:28:01.247 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:01.247 "strip_size_kb": 0, 00:28:01.247 "state": "online", 00:28:01.247 "raid_level": "raid1", 00:28:01.247 "superblock": true, 00:28:01.247 "num_base_bdevs": 2, 00:28:01.247 "num_base_bdevs_discovered": 2, 00:28:01.247 "num_base_bdevs_operational": 2, 00:28:01.247 "base_bdevs_list": [ 00:28:01.247 { 00:28:01.247 "name": "BaseBdev1", 00:28:01.247 "uuid": "c47d6d76-84a7-5cc3-91f0-60fd6a2e45de", 00:28:01.247 "is_configured": true, 00:28:01.247 "data_offset": 256, 00:28:01.247 "data_size": 7936 00:28:01.247 }, 00:28:01.247 { 00:28:01.247 "name": "BaseBdev2", 00:28:01.247 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:01.247 "is_configured": true, 00:28:01.247 "data_offset": 256, 00:28:01.247 "data_size": 7936 00:28:01.247 } 00:28:01.247 ] 00:28:01.247 }' 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:01.247 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:01.506 12:59:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:28:01.506 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:01.506 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.506 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:01.506 [2024-12-05 12:59:43.974437] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:01.506 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.506 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:28:01.506 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:01.506 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:01.506 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.506 12:59:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:28:01.506 12:59:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:01.506 [2024-12-05 12:59:44.030140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.506 12:59:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:01.506 "name": "raid_bdev1", 00:28:01.506 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:01.506 "strip_size_kb": 0, 00:28:01.506 "state": "online", 00:28:01.506 "raid_level": "raid1", 00:28:01.506 "superblock": true, 00:28:01.506 "num_base_bdevs": 2, 00:28:01.506 "num_base_bdevs_discovered": 1, 00:28:01.506 "num_base_bdevs_operational": 1, 00:28:01.506 "base_bdevs_list": [ 00:28:01.506 { 00:28:01.506 "name": null, 00:28:01.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.506 "is_configured": false, 00:28:01.506 "data_offset": 0, 00:28:01.506 "data_size": 7936 00:28:01.506 }, 00:28:01.506 { 00:28:01.506 "name": "BaseBdev2", 00:28:01.506 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:01.506 "is_configured": true, 00:28:01.506 "data_offset": 256, 00:28:01.506 "data_size": 7936 00:28:01.506 } 00:28:01.506 ] 00:28:01.506 }' 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:01.506 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:01.764 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:01.764 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.764 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:01.764 [2024-12-05 12:59:44.334231] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:01.764 [2024-12-05 12:59:44.345908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:01.764 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.764 12:59:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:28:01.764 [2024-12-05 12:59:44.347787] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:03.140 "name": "raid_bdev1", 00:28:03.140 
"uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:03.140 "strip_size_kb": 0, 00:28:03.140 "state": "online", 00:28:03.140 "raid_level": "raid1", 00:28:03.140 "superblock": true, 00:28:03.140 "num_base_bdevs": 2, 00:28:03.140 "num_base_bdevs_discovered": 2, 00:28:03.140 "num_base_bdevs_operational": 2, 00:28:03.140 "process": { 00:28:03.140 "type": "rebuild", 00:28:03.140 "target": "spare", 00:28:03.140 "progress": { 00:28:03.140 "blocks": 2560, 00:28:03.140 "percent": 32 00:28:03.140 } 00:28:03.140 }, 00:28:03.140 "base_bdevs_list": [ 00:28:03.140 { 00:28:03.140 "name": "spare", 00:28:03.140 "uuid": "24668838-9f33-5ef3-82fc-9e0a95a507e9", 00:28:03.140 "is_configured": true, 00:28:03.140 "data_offset": 256, 00:28:03.140 "data_size": 7936 00:28:03.140 }, 00:28:03.140 { 00:28:03.140 "name": "BaseBdev2", 00:28:03.140 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:03.140 "is_configured": true, 00:28:03.140 "data_offset": 256, 00:28:03.140 "data_size": 7936 00:28:03.140 } 00:28:03.140 ] 00:28:03.140 }' 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:03.140 [2024-12-05 12:59:45.461910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:28:03.140 [2024-12-05 12:59:45.553923] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:03.140 [2024-12-05 12:59:45.554000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:03.140 [2024-12-05 12:59:45.554014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:03.140 [2024-12-05 12:59:45.554024] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:03.140 "name": "raid_bdev1", 00:28:03.140 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:03.140 "strip_size_kb": 0, 00:28:03.140 "state": "online", 00:28:03.140 "raid_level": "raid1", 00:28:03.140 "superblock": true, 00:28:03.140 "num_base_bdevs": 2, 00:28:03.140 "num_base_bdevs_discovered": 1, 00:28:03.140 "num_base_bdevs_operational": 1, 00:28:03.140 "base_bdevs_list": [ 00:28:03.140 { 00:28:03.140 "name": null, 00:28:03.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.140 "is_configured": false, 00:28:03.140 "data_offset": 0, 00:28:03.140 "data_size": 7936 00:28:03.140 }, 00:28:03.140 { 00:28:03.140 "name": "BaseBdev2", 00:28:03.140 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:03.140 "is_configured": true, 00:28:03.140 "data_offset": 256, 00:28:03.140 "data_size": 7936 00:28:03.140 } 00:28:03.140 ] 00:28:03.140 }' 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:03.140 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:03.398 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:03.398 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:28:03.398 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:03.398 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:03.398 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:03.398 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:03.398 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.398 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:03.398 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:03.398 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.398 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:03.398 "name": "raid_bdev1", 00:28:03.398 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:03.398 "strip_size_kb": 0, 00:28:03.398 "state": "online", 00:28:03.398 "raid_level": "raid1", 00:28:03.398 "superblock": true, 00:28:03.398 "num_base_bdevs": 2, 00:28:03.398 "num_base_bdevs_discovered": 1, 00:28:03.398 "num_base_bdevs_operational": 1, 00:28:03.398 "base_bdevs_list": [ 00:28:03.398 { 00:28:03.398 "name": null, 00:28:03.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.398 "is_configured": false, 00:28:03.398 "data_offset": 0, 00:28:03.398 "data_size": 7936 00:28:03.398 }, 00:28:03.398 { 00:28:03.398 "name": "BaseBdev2", 00:28:03.398 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:03.398 "is_configured": true, 00:28:03.398 "data_offset": 256, 00:28:03.398 "data_size": 7936 00:28:03.398 } 00:28:03.398 ] 00:28:03.398 }' 
00:28:03.398 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:03.398 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:03.398 12:59:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:03.655 12:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:03.655 12:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:03.655 12:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.655 12:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:03.655 [2024-12-05 12:59:46.017253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:03.655 [2024-12-05 12:59:46.027878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:03.655 12:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.655 12:59:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:28:03.655 [2024-12-05 12:59:46.029840] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:04.588 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:04.588 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:04.588 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:04.588 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:28:04.588 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:04.588 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:04.588 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:04.588 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.588 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:04.588 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.589 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:04.589 "name": "raid_bdev1", 00:28:04.589 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:04.589 "strip_size_kb": 0, 00:28:04.589 "state": "online", 00:28:04.589 "raid_level": "raid1", 00:28:04.589 "superblock": true, 00:28:04.589 "num_base_bdevs": 2, 00:28:04.589 "num_base_bdevs_discovered": 2, 00:28:04.589 "num_base_bdevs_operational": 2, 00:28:04.589 "process": { 00:28:04.589 "type": "rebuild", 00:28:04.589 "target": "spare", 00:28:04.589 "progress": { 00:28:04.589 "blocks": 2560, 00:28:04.589 "percent": 32 00:28:04.589 } 00:28:04.589 }, 00:28:04.589 "base_bdevs_list": [ 00:28:04.589 { 00:28:04.589 "name": "spare", 00:28:04.589 "uuid": "24668838-9f33-5ef3-82fc-9e0a95a507e9", 00:28:04.589 "is_configured": true, 00:28:04.589 "data_offset": 256, 00:28:04.589 "data_size": 7936 00:28:04.589 }, 00:28:04.589 { 00:28:04.589 "name": "BaseBdev2", 00:28:04.589 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:04.589 "is_configured": true, 00:28:04.589 "data_offset": 256, 00:28:04.589 "data_size": 7936 00:28:04.589 } 00:28:04.589 ] 00:28:04.589 }' 00:28:04.589 12:59:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:04.589 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:04.589 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:04.589 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:04.589 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:28:04.589 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:28:04.589 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:28:04.589 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:28:04.589 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:28:04.589 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:28:04.589 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=580 00:28:04.589 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:04.589 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:04.589 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:04.589 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:04.589 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:04.589 12:59:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:04.589 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:04.589 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.589 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:04.589 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:04.589 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.848 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:04.849 "name": "raid_bdev1", 00:28:04.849 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:04.849 "strip_size_kb": 0, 00:28:04.849 "state": "online", 00:28:04.849 "raid_level": "raid1", 00:28:04.849 "superblock": true, 00:28:04.849 "num_base_bdevs": 2, 00:28:04.849 "num_base_bdevs_discovered": 2, 00:28:04.849 "num_base_bdevs_operational": 2, 00:28:04.849 "process": { 00:28:04.849 "type": "rebuild", 00:28:04.849 "target": "spare", 00:28:04.849 "progress": { 00:28:04.849 "blocks": 2816, 00:28:04.849 "percent": 35 00:28:04.849 } 00:28:04.849 }, 00:28:04.849 "base_bdevs_list": [ 00:28:04.849 { 00:28:04.849 "name": "spare", 00:28:04.849 "uuid": "24668838-9f33-5ef3-82fc-9e0a95a507e9", 00:28:04.849 "is_configured": true, 00:28:04.849 "data_offset": 256, 00:28:04.849 "data_size": 7936 00:28:04.849 }, 00:28:04.849 { 00:28:04.849 "name": "BaseBdev2", 00:28:04.849 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:04.849 "is_configured": true, 00:28:04.849 "data_offset": 256, 00:28:04.849 "data_size": 7936 00:28:04.849 } 00:28:04.849 ] 00:28:04.849 }' 00:28:04.849 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:04.849 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:04.849 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:04.849 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:04.849 12:59:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:05.785 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:05.785 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:05.785 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:05.785 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:05.785 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:05.785 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:05.785 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:05.785 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:05.785 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.785 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:05.785 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.785 12:59:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:05.785 "name": "raid_bdev1", 00:28:05.785 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:05.785 "strip_size_kb": 0, 00:28:05.785 "state": "online", 00:28:05.785 "raid_level": "raid1", 00:28:05.785 "superblock": true, 00:28:05.785 "num_base_bdevs": 2, 00:28:05.785 "num_base_bdevs_discovered": 2, 00:28:05.785 "num_base_bdevs_operational": 2, 00:28:05.785 "process": { 00:28:05.785 "type": "rebuild", 00:28:05.785 "target": "spare", 00:28:05.785 "progress": { 00:28:05.785 "blocks": 5632, 00:28:05.785 "percent": 70 00:28:05.785 } 00:28:05.785 }, 00:28:05.785 "base_bdevs_list": [ 00:28:05.785 { 00:28:05.785 "name": "spare", 00:28:05.785 "uuid": "24668838-9f33-5ef3-82fc-9e0a95a507e9", 00:28:05.785 "is_configured": true, 00:28:05.785 "data_offset": 256, 00:28:05.785 "data_size": 7936 00:28:05.785 }, 00:28:05.785 { 00:28:05.785 "name": "BaseBdev2", 00:28:05.785 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:05.785 "is_configured": true, 00:28:05.785 "data_offset": 256, 00:28:05.785 "data_size": 7936 00:28:05.785 } 00:28:05.785 ] 00:28:05.785 }' 00:28:05.785 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:05.785 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:05.785 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:05.785 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:05.785 12:59:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:06.719 [2024-12-05 12:59:49.144442] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:06.719 [2024-12-05 12:59:49.144524] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:06.719 [2024-12-05 12:59:49.144612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:06.977 "name": "raid_bdev1", 00:28:06.977 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:06.977 "strip_size_kb": 0, 00:28:06.977 "state": "online", 00:28:06.977 "raid_level": "raid1", 00:28:06.977 "superblock": true, 00:28:06.977 "num_base_bdevs": 2, 00:28:06.977 
"num_base_bdevs_discovered": 2, 00:28:06.977 "num_base_bdevs_operational": 2, 00:28:06.977 "base_bdevs_list": [ 00:28:06.977 { 00:28:06.977 "name": "spare", 00:28:06.977 "uuid": "24668838-9f33-5ef3-82fc-9e0a95a507e9", 00:28:06.977 "is_configured": true, 00:28:06.977 "data_offset": 256, 00:28:06.977 "data_size": 7936 00:28:06.977 }, 00:28:06.977 { 00:28:06.977 "name": "BaseBdev2", 00:28:06.977 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:06.977 "is_configured": true, 00:28:06.977 "data_offset": 256, 00:28:06.977 "data_size": 7936 00:28:06.977 } 00:28:06.977 ] 00:28:06.977 }' 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:06.977 
12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.977 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.978 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:06.978 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.978 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:06.978 "name": "raid_bdev1", 00:28:06.978 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:06.978 "strip_size_kb": 0, 00:28:06.978 "state": "online", 00:28:06.978 "raid_level": "raid1", 00:28:06.978 "superblock": true, 00:28:06.978 "num_base_bdevs": 2, 00:28:06.978 "num_base_bdevs_discovered": 2, 00:28:06.978 "num_base_bdevs_operational": 2, 00:28:06.978 "base_bdevs_list": [ 00:28:06.978 { 00:28:06.978 "name": "spare", 00:28:06.978 "uuid": "24668838-9f33-5ef3-82fc-9e0a95a507e9", 00:28:06.978 "is_configured": true, 00:28:06.978 "data_offset": 256, 00:28:06.978 "data_size": 7936 00:28:06.978 }, 00:28:06.978 { 00:28:06.978 "name": "BaseBdev2", 00:28:06.978 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:06.978 "is_configured": true, 00:28:06.978 "data_offset": 256, 00:28:06.978 "data_size": 7936 00:28:06.978 } 00:28:06.978 ] 00:28:06.978 }' 00:28:06.978 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:06.978 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:06.978 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:06.978 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:06.978 12:59:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:06.978 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:06.978 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:06.978 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:06.978 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:06.978 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:06.978 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:06.978 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:06.978 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:06.978 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:06.978 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.978 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:06.978 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.978 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:07.236 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.236 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:07.237 "name": 
"raid_bdev1", 00:28:07.237 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:07.237 "strip_size_kb": 0, 00:28:07.237 "state": "online", 00:28:07.237 "raid_level": "raid1", 00:28:07.237 "superblock": true, 00:28:07.237 "num_base_bdevs": 2, 00:28:07.237 "num_base_bdevs_discovered": 2, 00:28:07.237 "num_base_bdevs_operational": 2, 00:28:07.237 "base_bdevs_list": [ 00:28:07.237 { 00:28:07.237 "name": "spare", 00:28:07.237 "uuid": "24668838-9f33-5ef3-82fc-9e0a95a507e9", 00:28:07.237 "is_configured": true, 00:28:07.237 "data_offset": 256, 00:28:07.237 "data_size": 7936 00:28:07.237 }, 00:28:07.237 { 00:28:07.237 "name": "BaseBdev2", 00:28:07.237 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:07.237 "is_configured": true, 00:28:07.237 "data_offset": 256, 00:28:07.237 "data_size": 7936 00:28:07.237 } 00:28:07.237 ] 00:28:07.237 }' 00:28:07.237 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:07.237 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:07.495 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:07.495 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.495 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:07.495 [2024-12-05 12:59:49.879235] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:07.495 [2024-12-05 12:59:49.879260] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:07.495 [2024-12-05 12:59:49.879327] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:07.495 [2024-12-05 12:59:49.879387] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:07.495 [2024-12-05 
12:59:49.879398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:07.495 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.495 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:07.495 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:28:07.495 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.495 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:07.495 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.495 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:28:07.495 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:28:07.495 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:28:07.495 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:28:07.495 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.495 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:07.495 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.495 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:07.495 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.495 12:59:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:07.495 [2024-12-05 12:59:49.927227] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:07.495 [2024-12-05 12:59:49.927271] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:07.495 [2024-12-05 12:59:49.927287] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:28:07.495 [2024-12-05 12:59:49.927295] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:07.495 [2024-12-05 12:59:49.928968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:07.495 [2024-12-05 12:59:49.928997] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:07.495 [2024-12-05 12:59:49.929041] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:07.495 [2024-12-05 12:59:49.929080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:07.495 [2024-12-05 12:59:49.929164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:07.495 spare 00:28:07.495 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.495 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:28:07.496 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.496 12:59:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:07.496 [2024-12-05 12:59:50.029236] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:28:07.496 [2024-12-05 12:59:50.029269] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:07.496 [2024-12-05 12:59:50.029366] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:28:07.496 [2024-12-05 12:59:50.029448] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:28:07.496 [2024-12-05 12:59:50.029456] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:28:07.496 [2024-12-05 12:59:50.029559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:07.496 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.496 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:07.496 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:07.496 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:07.496 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:07.496 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:07.496 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:07.496 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:07.496 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:07.496 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:07.496 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:07.496 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:07.496 
12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:07.496 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.496 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:07.496 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.496 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:07.496 "name": "raid_bdev1", 00:28:07.496 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:07.496 "strip_size_kb": 0, 00:28:07.496 "state": "online", 00:28:07.496 "raid_level": "raid1", 00:28:07.496 "superblock": true, 00:28:07.496 "num_base_bdevs": 2, 00:28:07.496 "num_base_bdevs_discovered": 2, 00:28:07.496 "num_base_bdevs_operational": 2, 00:28:07.496 "base_bdevs_list": [ 00:28:07.496 { 00:28:07.496 "name": "spare", 00:28:07.496 "uuid": "24668838-9f33-5ef3-82fc-9e0a95a507e9", 00:28:07.496 "is_configured": true, 00:28:07.496 "data_offset": 256, 00:28:07.496 "data_size": 7936 00:28:07.496 }, 00:28:07.496 { 00:28:07.496 "name": "BaseBdev2", 00:28:07.496 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:07.496 "is_configured": true, 00:28:07.496 "data_offset": 256, 00:28:07.496 "data_size": 7936 00:28:07.496 } 00:28:07.496 ] 00:28:07.496 }' 00:28:07.496 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:07.496 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:07.754 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:07.754 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:07.754 12:59:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:07.754 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:07.754 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:07.754 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:07.754 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:07.754 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.754 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:07.754 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:08.013 "name": "raid_bdev1", 00:28:08.013 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:08.013 "strip_size_kb": 0, 00:28:08.013 "state": "online", 00:28:08.013 "raid_level": "raid1", 00:28:08.013 "superblock": true, 00:28:08.013 "num_base_bdevs": 2, 00:28:08.013 "num_base_bdevs_discovered": 2, 00:28:08.013 "num_base_bdevs_operational": 2, 00:28:08.013 "base_bdevs_list": [ 00:28:08.013 { 00:28:08.013 "name": "spare", 00:28:08.013 "uuid": "24668838-9f33-5ef3-82fc-9e0a95a507e9", 00:28:08.013 "is_configured": true, 00:28:08.013 "data_offset": 256, 00:28:08.013 "data_size": 7936 00:28:08.013 }, 00:28:08.013 { 00:28:08.013 "name": "BaseBdev2", 00:28:08.013 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:08.013 "is_configured": true, 00:28:08.013 "data_offset": 256, 00:28:08.013 "data_size": 7936 00:28:08.013 } 00:28:08.013 ] 00:28:08.013 }' 00:28:08.013 12:59:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:08.013 [2024-12-05 12:59:50.459405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:08.013 12:59:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:08.013 "name": "raid_bdev1", 00:28:08.013 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:08.013 "strip_size_kb": 0, 00:28:08.013 "state": "online", 00:28:08.013 
"raid_level": "raid1", 00:28:08.013 "superblock": true, 00:28:08.013 "num_base_bdevs": 2, 00:28:08.013 "num_base_bdevs_discovered": 1, 00:28:08.013 "num_base_bdevs_operational": 1, 00:28:08.013 "base_bdevs_list": [ 00:28:08.013 { 00:28:08.013 "name": null, 00:28:08.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.013 "is_configured": false, 00:28:08.013 "data_offset": 0, 00:28:08.013 "data_size": 7936 00:28:08.013 }, 00:28:08.013 { 00:28:08.013 "name": "BaseBdev2", 00:28:08.013 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:08.013 "is_configured": true, 00:28:08.013 "data_offset": 256, 00:28:08.013 "data_size": 7936 00:28:08.013 } 00:28:08.013 ] 00:28:08.013 }' 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:08.013 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:08.271 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:08.271 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.271 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:08.272 [2024-12-05 12:59:50.759469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:08.272 [2024-12-05 12:59:50.759650] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:08.272 [2024-12-05 12:59:50.759664] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:08.272 [2024-12-05 12:59:50.759697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:08.272 [2024-12-05 12:59:50.768648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:28:08.272 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.272 12:59:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:28:08.272 [2024-12-05 12:59:50.770240] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:09.205 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:09.205 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:09.205 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:09.205 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:09.205 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:09.205 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:09.205 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:09.205 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.205 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.205 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.463 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:28:09.463 "name": "raid_bdev1", 00:28:09.463 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:09.463 "strip_size_kb": 0, 00:28:09.463 "state": "online", 00:28:09.463 "raid_level": "raid1", 00:28:09.463 "superblock": true, 00:28:09.463 "num_base_bdevs": 2, 00:28:09.463 "num_base_bdevs_discovered": 2, 00:28:09.463 "num_base_bdevs_operational": 2, 00:28:09.463 "process": { 00:28:09.463 "type": "rebuild", 00:28:09.463 "target": "spare", 00:28:09.463 "progress": { 00:28:09.463 "blocks": 2560, 00:28:09.463 "percent": 32 00:28:09.463 } 00:28:09.463 }, 00:28:09.463 "base_bdevs_list": [ 00:28:09.463 { 00:28:09.463 "name": "spare", 00:28:09.463 "uuid": "24668838-9f33-5ef3-82fc-9e0a95a507e9", 00:28:09.463 "is_configured": true, 00:28:09.463 "data_offset": 256, 00:28:09.463 "data_size": 7936 00:28:09.463 }, 00:28:09.463 { 00:28:09.463 "name": "BaseBdev2", 00:28:09.463 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:09.463 "is_configured": true, 00:28:09.463 "data_offset": 256, 00:28:09.463 "data_size": 7936 00:28:09.463 } 00:28:09.463 ] 00:28:09.463 }' 00:28:09.463 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:09.463 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:09.463 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:09.463 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:09.463 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:28:09.463 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.463 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.463 [2024-12-05 12:59:51.884555] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:09.463 [2024-12-05 12:59:51.975922] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:09.463 [2024-12-05 12:59:51.975997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:09.463 [2024-12-05 12:59:51.976009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:09.463 [2024-12-05 12:59:51.976017] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:09.463 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.463 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:09.463 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:09.463 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:09.463 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:09.463 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:09.463 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:09.463 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:09.463 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:09.463 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:09.463 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:09.463 12:59:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:09.463 12:59:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:09.463 12:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.463 12:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.463 12:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.463 12:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:09.463 "name": "raid_bdev1", 00:28:09.463 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:09.463 "strip_size_kb": 0, 00:28:09.463 "state": "online", 00:28:09.463 "raid_level": "raid1", 00:28:09.463 "superblock": true, 00:28:09.463 "num_base_bdevs": 2, 00:28:09.463 "num_base_bdevs_discovered": 1, 00:28:09.463 "num_base_bdevs_operational": 1, 00:28:09.463 "base_bdevs_list": [ 00:28:09.463 { 00:28:09.463 "name": null, 00:28:09.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:09.463 "is_configured": false, 00:28:09.463 "data_offset": 0, 00:28:09.463 "data_size": 7936 00:28:09.463 }, 00:28:09.463 { 00:28:09.463 "name": "BaseBdev2", 00:28:09.463 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:09.463 "is_configured": true, 00:28:09.463 "data_offset": 256, 00:28:09.463 "data_size": 7936 00:28:09.463 } 00:28:09.463 ] 00:28:09.463 }' 00:28:09.463 12:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:09.463 12:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.030 12:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:10.030 12:59:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.030 12:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.030 [2024-12-05 12:59:52.318360] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:10.030 [2024-12-05 12:59:52.318432] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:10.030 [2024-12-05 12:59:52.318457] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:28:10.030 [2024-12-05 12:59:52.318468] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:10.030 [2024-12-05 12:59:52.318671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:10.030 [2024-12-05 12:59:52.318687] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:10.030 [2024-12-05 12:59:52.318742] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:10.030 [2024-12-05 12:59:52.318756] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:10.030 [2024-12-05 12:59:52.318766] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:10.030 [2024-12-05 12:59:52.318786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:10.030 [2024-12-05 12:59:52.329535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:28:10.030 spare 00:28:10.030 12:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.030 12:59:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:28:10.030 [2024-12-05 12:59:52.331401] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:28:10.964 "name": "raid_bdev1", 00:28:10.964 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:10.964 "strip_size_kb": 0, 00:28:10.964 "state": "online", 00:28:10.964 "raid_level": "raid1", 00:28:10.964 "superblock": true, 00:28:10.964 "num_base_bdevs": 2, 00:28:10.964 "num_base_bdevs_discovered": 2, 00:28:10.964 "num_base_bdevs_operational": 2, 00:28:10.964 "process": { 00:28:10.964 "type": "rebuild", 00:28:10.964 "target": "spare", 00:28:10.964 "progress": { 00:28:10.964 "blocks": 2560, 00:28:10.964 "percent": 32 00:28:10.964 } 00:28:10.964 }, 00:28:10.964 "base_bdevs_list": [ 00:28:10.964 { 00:28:10.964 "name": "spare", 00:28:10.964 "uuid": "24668838-9f33-5ef3-82fc-9e0a95a507e9", 00:28:10.964 "is_configured": true, 00:28:10.964 "data_offset": 256, 00:28:10.964 "data_size": 7936 00:28:10.964 }, 00:28:10.964 { 00:28:10.964 "name": "BaseBdev2", 00:28:10.964 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:10.964 "is_configured": true, 00:28:10.964 "data_offset": 256, 00:28:10.964 "data_size": 7936 00:28:10.964 } 00:28:10.964 ] 00:28:10.964 }' 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.964 [2024-12-05 
12:59:53.429593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:10.964 [2024-12-05 12:59:53.436846] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:10.964 [2024-12-05 12:59:53.437035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:10.964 [2024-12-05 12:59:53.437057] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:10.964 [2024-12-05 12:59:53.437066] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:10.964 12:59:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:10.964 "name": "raid_bdev1", 00:28:10.964 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:10.964 "strip_size_kb": 0, 00:28:10.964 "state": "online", 00:28:10.964 "raid_level": "raid1", 00:28:10.964 "superblock": true, 00:28:10.964 "num_base_bdevs": 2, 00:28:10.964 "num_base_bdevs_discovered": 1, 00:28:10.964 "num_base_bdevs_operational": 1, 00:28:10.964 "base_bdevs_list": [ 00:28:10.964 { 00:28:10.964 "name": null, 00:28:10.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:10.964 "is_configured": false, 00:28:10.964 "data_offset": 0, 00:28:10.964 "data_size": 7936 00:28:10.964 }, 00:28:10.964 { 00:28:10.964 "name": "BaseBdev2", 00:28:10.964 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:10.964 "is_configured": true, 00:28:10.964 "data_offset": 256, 00:28:10.964 "data_size": 7936 00:28:10.964 } 00:28:10.964 ] 00:28:10.964 }' 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:10.964 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.222 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:11.222 12:59:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:11.222 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:11.222 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:11.222 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:11.222 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:11.222 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:11.222 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.222 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.222 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.222 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:11.222 "name": "raid_bdev1", 00:28:11.222 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:11.222 "strip_size_kb": 0, 00:28:11.222 "state": "online", 00:28:11.222 "raid_level": "raid1", 00:28:11.222 "superblock": true, 00:28:11.222 "num_base_bdevs": 2, 00:28:11.222 "num_base_bdevs_discovered": 1, 00:28:11.222 "num_base_bdevs_operational": 1, 00:28:11.222 "base_bdevs_list": [ 00:28:11.222 { 00:28:11.222 "name": null, 00:28:11.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:11.222 "is_configured": false, 00:28:11.222 "data_offset": 0, 00:28:11.222 "data_size": 7936 00:28:11.222 }, 00:28:11.222 { 00:28:11.222 "name": "BaseBdev2", 00:28:11.222 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:11.222 "is_configured": true, 00:28:11.222 "data_offset": 256, 
00:28:11.222 "data_size": 7936 00:28:11.222 } 00:28:11.222 ] 00:28:11.222 }' 00:28:11.222 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:11.548 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:11.548 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:11.548 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:11.548 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:28:11.548 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.548 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.548 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.548 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:11.548 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.548 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:11.548 [2024-12-05 12:59:53.872191] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:11.548 [2024-12-05 12:59:53.872369] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:11.548 [2024-12-05 12:59:53.872395] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:28:11.548 [2024-12-05 12:59:53.872405] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:11.548 [2024-12-05 12:59:53.872582] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:11.548 [2024-12-05 12:59:53.872594] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:11.548 [2024-12-05 12:59:53.872642] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:11.548 [2024-12-05 12:59:53.872656] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:11.548 [2024-12-05 12:59:53.872664] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:11.548 [2024-12-05 12:59:53.872673] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:28:11.548 BaseBdev1 00:28:11.548 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.548 12:59:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:28:12.485 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:12.485 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:12.485 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:12.485 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:12.485 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:12.485 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:12.485 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:12.485 12:59:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:12.485 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:12.485 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:12.485 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:12.485 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.485 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:12.485 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:12.485 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.486 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:12.486 "name": "raid_bdev1", 00:28:12.486 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:12.486 "strip_size_kb": 0, 00:28:12.486 "state": "online", 00:28:12.486 "raid_level": "raid1", 00:28:12.486 "superblock": true, 00:28:12.486 "num_base_bdevs": 2, 00:28:12.486 "num_base_bdevs_discovered": 1, 00:28:12.486 "num_base_bdevs_operational": 1, 00:28:12.486 "base_bdevs_list": [ 00:28:12.486 { 00:28:12.486 "name": null, 00:28:12.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:12.486 "is_configured": false, 00:28:12.486 "data_offset": 0, 00:28:12.486 "data_size": 7936 00:28:12.486 }, 00:28:12.486 { 00:28:12.486 "name": "BaseBdev2", 00:28:12.486 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:12.486 "is_configured": true, 00:28:12.486 "data_offset": 256, 00:28:12.486 "data_size": 7936 00:28:12.486 } 00:28:12.486 ] 00:28:12.486 }' 00:28:12.486 12:59:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:12.486 12:59:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:12.743 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:12.743 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:12.743 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:12.743 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:12.743 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:12.743 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:12.743 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.743 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:12.743 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:12.743 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.743 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:12.743 "name": "raid_bdev1", 00:28:12.743 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:12.743 "strip_size_kb": 0, 00:28:12.743 "state": "online", 00:28:12.743 "raid_level": "raid1", 00:28:12.743 "superblock": true, 00:28:12.743 "num_base_bdevs": 2, 00:28:12.743 "num_base_bdevs_discovered": 1, 00:28:12.743 "num_base_bdevs_operational": 1, 00:28:12.743 "base_bdevs_list": [ 00:28:12.743 { 00:28:12.743 "name": 
null, 00:28:12.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:12.743 "is_configured": false, 00:28:12.743 "data_offset": 0, 00:28:12.743 "data_size": 7936 00:28:12.743 }, 00:28:12.743 { 00:28:12.743 "name": "BaseBdev2", 00:28:12.743 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:12.743 "is_configured": true, 00:28:12.744 "data_offset": 256, 00:28:12.744 "data_size": 7936 00:28:12.744 } 00:28:12.744 ] 00:28:12.744 }' 00:28:12.744 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:12.744 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:12.744 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:12.744 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:12.744 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:12.744 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:28:12.744 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:12.744 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:12.744 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:12.744 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:12.744 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:12.744 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:12.744 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.744 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:12.744 [2024-12-05 12:59:55.284595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:12.744 [2024-12-05 12:59:55.284744] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:12.744 [2024-12-05 12:59:55.284763] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:12.744 request: 00:28:12.744 { 00:28:12.744 "base_bdev": "BaseBdev1", 00:28:12.744 "raid_bdev": "raid_bdev1", 00:28:12.744 "method": "bdev_raid_add_base_bdev", 00:28:12.744 "req_id": 1 00:28:12.744 } 00:28:12.744 Got JSON-RPC error response 00:28:12.744 response: 00:28:12.744 { 00:28:12.744 "code": -22, 00:28:12.744 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:12.744 } 00:28:12.744 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:12.744 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:28:12.744 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:12.744 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:12.744 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:12.744 12:59:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:28:14.116 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:28:14.116 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:14.116 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:14.116 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:14.116 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:14.116 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:14.116 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:14.116 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:14.116 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:14.116 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:14.116 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:14.116 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:14.116 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.116 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.116 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.116 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:14.116 "name": "raid_bdev1", 00:28:14.117 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:14.117 "strip_size_kb": 0, 
00:28:14.117 "state": "online", 00:28:14.117 "raid_level": "raid1", 00:28:14.117 "superblock": true, 00:28:14.117 "num_base_bdevs": 2, 00:28:14.117 "num_base_bdevs_discovered": 1, 00:28:14.117 "num_base_bdevs_operational": 1, 00:28:14.117 "base_bdevs_list": [ 00:28:14.117 { 00:28:14.117 "name": null, 00:28:14.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:14.117 "is_configured": false, 00:28:14.117 "data_offset": 0, 00:28:14.117 "data_size": 7936 00:28:14.117 }, 00:28:14.117 { 00:28:14.117 "name": "BaseBdev2", 00:28:14.117 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:14.117 "is_configured": true, 00:28:14.117 "data_offset": 256, 00:28:14.117 "data_size": 7936 00:28:14.117 } 00:28:14.117 ] 00:28:14.117 }' 00:28:14.117 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:14.117 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.117 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:14.117 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:14.117 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:14.117 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:14.117 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:14.117 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:14.117 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:14.117 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.117 
12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.117 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.117 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:14.117 "name": "raid_bdev1", 00:28:14.117 "uuid": "4fec660c-e444-4342-b062-87678bd91da3", 00:28:14.117 "strip_size_kb": 0, 00:28:14.117 "state": "online", 00:28:14.117 "raid_level": "raid1", 00:28:14.117 "superblock": true, 00:28:14.117 "num_base_bdevs": 2, 00:28:14.117 "num_base_bdevs_discovered": 1, 00:28:14.117 "num_base_bdevs_operational": 1, 00:28:14.117 "base_bdevs_list": [ 00:28:14.117 { 00:28:14.117 "name": null, 00:28:14.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:14.117 "is_configured": false, 00:28:14.117 "data_offset": 0, 00:28:14.117 "data_size": 7936 00:28:14.117 }, 00:28:14.117 { 00:28:14.117 "name": "BaseBdev2", 00:28:14.117 "uuid": "d2e012e0-aed8-5be4-8fa9-49cf2722b559", 00:28:14.117 "is_configured": true, 00:28:14.117 "data_offset": 256, 00:28:14.117 "data_size": 7936 00:28:14.117 } 00:28:14.117 ] 00:28:14.117 }' 00:28:14.117 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:14.117 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:14.117 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:14.117 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:14.117 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 86199 00:28:14.117 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 86199 ']' 00:28:14.117 12:59:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 86199 00:28:14.117 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:28:14.117 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:14.117 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86199 00:28:14.374 killing process with pid 86199 00:28:14.374 Received shutdown signal, test time was about 60.000000 seconds 00:28:14.374 00:28:14.374 Latency(us) 00:28:14.374 [2024-12-05T12:59:56.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.374 [2024-12-05T12:59:56.961Z] =================================================================================================================== 00:28:14.374 [2024-12-05T12:59:56.961Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:14.374 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:14.374 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:14.374 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86199' 00:28:14.374 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 86199 00:28:14.374 [2024-12-05 12:59:56.703197] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:14.374 12:59:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 86199 00:28:14.374 [2024-12-05 12:59:56.703312] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:14.374 [2024-12-05 12:59:56.703358] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:28:14.374 [2024-12-05 12:59:56.703370] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:28:14.374 [2024-12-05 12:59:56.867411] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:14.977 ************************************ 00:28:14.977 END TEST raid_rebuild_test_sb_md_interleaved 00:28:14.977 ************************************ 00:28:14.977 12:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:28:14.977 00:28:14.977 real 0m14.921s 00:28:14.977 user 0m18.983s 00:28:14.977 sys 0m1.073s 00:28:14.977 12:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:14.977 12:59:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.977 12:59:57 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:28:14.977 12:59:57 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:28:14.977 12:59:57 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 86199 ']' 00:28:14.977 12:59:57 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 86199 00:28:14.977 12:59:57 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:28:14.977 00:28:14.977 real 9m20.067s 00:28:14.977 user 12m29.948s 00:28:14.977 sys 1m15.889s 00:28:14.977 12:59:57 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:14.977 ************************************ 00:28:14.977 END TEST bdev_raid 00:28:14.977 ************************************ 00:28:14.977 12:59:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:14.977 12:59:57 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:28:14.977 12:59:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:14.977 12:59:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:14.977 12:59:57 -- common/autotest_common.sh@10 -- # set +x 00:28:14.977 
************************************ 00:28:14.977 START TEST spdkcli_raid 00:28:14.977 ************************************ 00:28:14.977 12:59:57 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:28:15.234 * Looking for test storage... 00:28:15.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:28:15.234 12:59:57 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:15.234 12:59:57 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:28:15.234 12:59:57 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:15.234 12:59:57 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:15.234 12:59:57 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:28:15.234 12:59:57 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:15.234 12:59:57 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:15.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.234 --rc genhtml_branch_coverage=1 00:28:15.234 --rc genhtml_function_coverage=1 00:28:15.234 --rc genhtml_legend=1 00:28:15.234 --rc geninfo_all_blocks=1 00:28:15.234 --rc geninfo_unexecuted_blocks=1 00:28:15.234 00:28:15.234 ' 00:28:15.234 12:59:57 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:15.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.234 --rc genhtml_branch_coverage=1 00:28:15.234 --rc genhtml_function_coverage=1 00:28:15.234 --rc genhtml_legend=1 00:28:15.234 --rc geninfo_all_blocks=1 00:28:15.234 --rc geninfo_unexecuted_blocks=1 00:28:15.234 00:28:15.234 ' 00:28:15.234 
12:59:57 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:15.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.234 --rc genhtml_branch_coverage=1 00:28:15.234 --rc genhtml_function_coverage=1 00:28:15.234 --rc genhtml_legend=1 00:28:15.234 --rc geninfo_all_blocks=1 00:28:15.234 --rc geninfo_unexecuted_blocks=1 00:28:15.234 00:28:15.234 ' 00:28:15.234 12:59:57 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:15.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.234 --rc genhtml_branch_coverage=1 00:28:15.234 --rc genhtml_function_coverage=1 00:28:15.234 --rc genhtml_legend=1 00:28:15.234 --rc geninfo_all_blocks=1 00:28:15.234 --rc geninfo_unexecuted_blocks=1 00:28:15.234 00:28:15.234 ' 00:28:15.234 12:59:57 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:28:15.234 12:59:57 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:28:15.234 12:59:57 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:28:15.234 12:59:57 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:28:15.234 12:59:57 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:28:15.234 12:59:57 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:28:15.234 12:59:57 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:28:15.234 12:59:57 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:28:15.234 12:59:57 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:28:15.234 12:59:57 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:28:15.234 12:59:57 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:28:15.234 12:59:57 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:28:15.234 12:59:57 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:28:15.234 12:59:57 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:28:15.234 12:59:57 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:28:15.234 12:59:57 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:28:15.234 12:59:57 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:28:15.234 12:59:57 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:28:15.234 12:59:57 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:28:15.234 12:59:57 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:28:15.234 12:59:57 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:28:15.234 12:59:57 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:28:15.234 12:59:57 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:28:15.234 12:59:57 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:28:15.234 12:59:57 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:28:15.234 12:59:57 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:28:15.234 12:59:57 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:28:15.234 12:59:57 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:28:15.234 12:59:57 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:28:15.234 12:59:57 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:28:15.234 12:59:57 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:28:15.234 12:59:57 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:28:15.234 12:59:57 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:28:15.234 12:59:57 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:15.234 12:59:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:15.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.234 12:59:57 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:28:15.234 12:59:57 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=86852 00:28:15.234 12:59:57 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 86852 00:28:15.234 12:59:57 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 86852 ']' 00:28:15.234 12:59:57 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.234 12:59:57 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:15.234 12:59:57 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.234 12:59:57 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:15.234 12:59:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:15.234 12:59:57 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:28:15.234 [2024-12-05 12:59:57.727696] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:28:15.234 [2024-12-05 12:59:57.727795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86852 ] 00:28:15.491 [2024-12-05 12:59:57.878979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:15.491 [2024-12-05 12:59:57.982904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.491 [2024-12-05 12:59:57.982923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.055 12:59:58 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:16.055 12:59:58 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:28:16.055 12:59:58 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:28:16.055 12:59:58 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:16.055 12:59:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:16.055 12:59:58 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:28:16.055 12:59:58 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:16.055 12:59:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:16.055 12:59:58 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:16.055 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:16.055 ' 00:28:17.966 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:28:17.966 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:28:17.966 13:00:00 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:28:17.966 13:00:00 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:17.966 13:00:00 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:28:17.966 13:00:00 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:28:17.966 13:00:00 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:17.966 13:00:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:17.966 13:00:00 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:28:17.966 ' 00:28:18.931 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:28:18.931 13:00:01 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:28:18.931 13:00:01 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:18.931 13:00:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:18.931 13:00:01 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:28:18.931 13:00:01 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:18.931 13:00:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:18.931 13:00:01 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:28:18.931 13:00:01 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:28:19.499 13:00:01 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:28:19.499 13:00:01 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:28:19.499 13:00:01 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:28:19.499 13:00:01 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:19.499 13:00:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:19.499 13:00:01 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:28:19.499 13:00:01 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:19.499 13:00:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:19.499 13:00:01 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:28:19.499 ' 00:28:20.433 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:28:20.433 13:00:02 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:28:20.433 13:00:02 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:20.433 13:00:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:20.433 13:00:02 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:28:20.433 13:00:02 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:20.433 13:00:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:20.433 13:00:02 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:28:20.433 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:28:20.433 ' 00:28:21.813 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:28:21.813 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:28:21.813 13:00:04 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:28:21.813 13:00:04 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:21.813 13:00:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:22.074 13:00:04 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 86852 00:28:22.074 13:00:04 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 86852 ']' 00:28:22.074 13:00:04 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 86852 00:28:22.074 13:00:04 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:28:22.074 13:00:04 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:22.075 13:00:04 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86852 00:28:22.075 13:00:04 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:22.075 killing process with pid 86852 00:28:22.075 13:00:04 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:22.075 13:00:04 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86852' 00:28:22.075 13:00:04 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 86852 00:28:22.075 13:00:04 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 86852 00:28:23.456 Process with pid 86852 is not found 00:28:23.456 13:00:05 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:28:23.456 13:00:05 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 86852 ']' 00:28:23.456 13:00:05 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 86852 00:28:23.456 13:00:05 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 86852 ']' 00:28:23.456 13:00:05 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 86852 00:28:23.456 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (86852) - No such process 00:28:23.456 13:00:05 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 86852 is not found' 00:28:23.456 13:00:05 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:28:23.456 13:00:05 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:28:23.456 13:00:05 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:28:23.456 13:00:05 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:28:23.456 ************************************ 00:28:23.457 END TEST spdkcli_raid 
00:28:23.457 ************************************ 00:28:23.457 00:28:23.457 real 0m8.120s 00:28:23.457 user 0m16.940s 00:28:23.457 sys 0m0.735s 00:28:23.457 13:00:05 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:23.457 13:00:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:28:23.457 13:00:05 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:28:23.457 13:00:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:23.457 13:00:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:23.457 13:00:05 -- common/autotest_common.sh@10 -- # set +x 00:28:23.457 ************************************ 00:28:23.457 START TEST blockdev_raid5f 00:28:23.457 ************************************ 00:28:23.457 13:00:05 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:28:23.457 * Looking for test storage... 00:28:23.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:28:23.457 13:00:05 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:23.457 13:00:05 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:23.457 13:00:05 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:28:23.457 13:00:05 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:23.457 13:00:05 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:28:23.457 13:00:05 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:23.457 13:00:05 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:23.457 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.457 --rc genhtml_branch_coverage=1 00:28:23.457 --rc genhtml_function_coverage=1 00:28:23.457 --rc genhtml_legend=1 00:28:23.457 --rc geninfo_all_blocks=1 00:28:23.457 --rc geninfo_unexecuted_blocks=1 00:28:23.457 00:28:23.457 ' 00:28:23.457 13:00:05 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:23.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.457 --rc genhtml_branch_coverage=1 00:28:23.457 --rc genhtml_function_coverage=1 00:28:23.457 --rc genhtml_legend=1 00:28:23.457 --rc geninfo_all_blocks=1 00:28:23.457 --rc geninfo_unexecuted_blocks=1 00:28:23.457 00:28:23.457 ' 00:28:23.457 13:00:05 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:23.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.457 --rc genhtml_branch_coverage=1 00:28:23.457 --rc genhtml_function_coverage=1 00:28:23.457 --rc genhtml_legend=1 00:28:23.457 --rc geninfo_all_blocks=1 00:28:23.457 --rc geninfo_unexecuted_blocks=1 00:28:23.457 00:28:23.457 ' 00:28:23.457 13:00:05 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:23.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.457 --rc genhtml_branch_coverage=1 00:28:23.457 --rc genhtml_function_coverage=1 00:28:23.457 --rc genhtml_legend=1 00:28:23.457 --rc geninfo_all_blocks=1 00:28:23.457 --rc geninfo_unexecuted_blocks=1 00:28:23.457 00:28:23.457 ' 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:28:23.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=87112 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:23.457 13:00:05 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 87112 00:28:23.457 13:00:05 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 87112 ']' 00:28:23.457 13:00:05 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.457 13:00:05 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:23.457 13:00:05 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.457 13:00:05 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:23.457 13:00:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:23.457 [2024-12-05 13:00:05.915864] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:28:23.457 [2024-12-05 13:00:05.916007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87112 ] 00:28:23.758 [2024-12-05 13:00:06.080661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.758 [2024-12-05 13:00:06.183320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.349 13:00:06 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:24.349 13:00:06 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:28:24.349 13:00:06 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:28:24.349 13:00:06 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:28:24.349 13:00:06 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:28:24.349 13:00:06 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.349 13:00:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:24.349 Malloc0 00:28:24.349 Malloc1 00:28:24.349 Malloc2 00:28:24.349 13:00:06 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.349 13:00:06 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:28:24.349 13:00:06 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.349 13:00:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:24.349 13:00:06 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.349 13:00:06 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:28:24.349 13:00:06 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:28:24.349 13:00:06 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.349 13:00:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:24.349 13:00:06 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.349 13:00:06 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:28:24.349 13:00:06 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.349 13:00:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:24.349 13:00:06 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.349 13:00:06 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:28:24.349 13:00:06 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.349 13:00:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:24.349 13:00:06 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.611 13:00:06 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:28:24.611 13:00:06 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:28:24.611 13:00:06 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.611 13:00:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:24.611 13:00:06 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:28:24.612 13:00:06 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.612 13:00:06 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:28:24.612 13:00:06 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:28:24.612 13:00:06 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "8473002e-f422-431d-bcef-35497162c2c4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "8473002e-f422-431d-bcef-35497162c2c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "8473002e-f422-431d-bcef-35497162c2c4",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "3d15869a-4868-46a7-b71f-1586b5c29f0c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "92c903af-ec83-4bd2-a7ed-e730b7aebb9e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "0b8974ea-9532-4231-b39c-6a63e0143993",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:28:24.612 13:00:07 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:28:24.612 13:00:07 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:28:24.612 13:00:07 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:28:24.612 13:00:07 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 87112 00:28:24.612 13:00:07 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 87112 ']' 00:28:24.612 13:00:07 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 87112 00:28:24.612 13:00:07 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:28:24.612 13:00:07 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:24.612 
13:00:07 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87112 00:28:24.612 killing process with pid 87112 00:28:24.612 13:00:07 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:24.612 13:00:07 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:24.612 13:00:07 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87112' 00:28:24.612 13:00:07 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 87112 00:28:24.612 13:00:07 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 87112 00:28:26.528 13:00:08 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:26.528 13:00:08 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:28:26.528 13:00:08 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:28:26.528 13:00:08 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:26.528 13:00:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:26.528 ************************************ 00:28:26.528 START TEST bdev_hello_world 00:28:26.528 ************************************ 00:28:26.528 13:00:08 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:28:26.528 [2024-12-05 13:00:08.805116] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 
00:28:26.528 [2024-12-05 13:00:08.805426] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87162 ] 00:28:26.528 [2024-12-05 13:00:08.956893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.528 [2024-12-05 13:00:09.060146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.095 [2024-12-05 13:00:09.452809] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:28:27.095 [2024-12-05 13:00:09.452865] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:28:27.095 [2024-12-05 13:00:09.452891] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:28:27.095 [2024-12-05 13:00:09.453351] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:28:27.095 [2024-12-05 13:00:09.453485] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:28:27.095 [2024-12-05 13:00:09.453518] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:28:27.095 [2024-12-05 13:00:09.453574] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:28:27.095 00:28:27.095 [2024-12-05 13:00:09.453591] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:28:28.031 00:28:28.031 real 0m1.610s 00:28:28.031 user 0m1.300s 00:28:28.031 sys 0m0.190s 00:28:28.031 13:00:10 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:28.031 13:00:10 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:28:28.031 ************************************ 00:28:28.031 END TEST bdev_hello_world 00:28:28.031 ************************************ 00:28:28.031 13:00:10 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:28:28.031 13:00:10 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:28.031 13:00:10 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:28.031 13:00:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:28.031 ************************************ 00:28:28.032 START TEST bdev_bounds 00:28:28.032 ************************************ 00:28:28.032 13:00:10 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:28:28.032 13:00:10 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=87199 00:28:28.032 13:00:10 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:28:28.032 13:00:10 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:28.032 Process bdevio pid: 87199 00:28:28.032 13:00:10 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 87199' 00:28:28.032 13:00:10 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 87199 00:28:28.032 13:00:10 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 87199 ']' 00:28:28.032 13:00:10 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.032 13:00:10 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:28.032 13:00:10 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.032 13:00:10 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:28.032 13:00:10 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:28:28.032 [2024-12-05 13:00:10.464580] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:28:28.032 [2024-12-05 13:00:10.464857] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87199 ] 00:28:28.289 [2024-12-05 13:00:10.623342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:28.289 [2024-12-05 13:00:10.729723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.289 [2024-12-05 13:00:10.730195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:28.289 [2024-12-05 13:00:10.730469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.853 13:00:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:28.853 13:00:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:28:28.853 13:00:11 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:28.853 I/O targets: 00:28:28.853 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:28:28.853 00:28:28.853 
00:28:28.853 CUnit - A unit testing framework for C - Version 2.1-3 00:28:28.853 http://cunit.sourceforge.net/ 00:28:28.853 00:28:28.853 00:28:28.853 Suite: bdevio tests on: raid5f 00:28:28.853 Test: blockdev write read block ...passed 00:28:28.853 Test: blockdev write zeroes read block ...passed 00:28:28.853 Test: blockdev write zeroes read no split ...passed 00:28:29.111 Test: blockdev write zeroes read split ...passed 00:28:29.111 Test: blockdev write zeroes read split partial ...passed 00:28:29.111 Test: blockdev reset ...passed 00:28:29.111 Test: blockdev write read 8 blocks ...passed 00:28:29.111 Test: blockdev write read size > 128k ...passed 00:28:29.111 Test: blockdev write read invalid size ...passed 00:28:29.111 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:29.111 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:29.111 Test: blockdev write read max offset ...passed 00:28:29.111 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:29.111 Test: blockdev writev readv 8 blocks ...passed 00:28:29.111 Test: blockdev writev readv 30 x 1block ...passed 00:28:29.111 Test: blockdev writev readv block ...passed 00:28:29.111 Test: blockdev writev readv size > 128k ...passed 00:28:29.111 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:29.111 Test: blockdev comparev and writev ...passed 00:28:29.111 Test: blockdev nvme passthru rw ...passed 00:28:29.111 Test: blockdev nvme passthru vendor specific ...passed 00:28:29.111 Test: blockdev nvme admin passthru ...passed 00:28:29.111 Test: blockdev copy ...passed 00:28:29.111 00:28:29.111 Run Summary: Type Total Ran Passed Failed Inactive 00:28:29.111 suites 1 1 n/a 0 0 00:28:29.111 tests 23 23 23 0 0 00:28:29.111 asserts 130 130 130 0 n/a 00:28:29.111 00:28:29.111 Elapsed time = 0.435 seconds 00:28:29.111 0 00:28:29.111 13:00:11 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 87199 00:28:29.111 
13:00:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 87199 ']' 00:28:29.111 13:00:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 87199 00:28:29.111 13:00:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:28:29.111 13:00:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:29.111 13:00:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87199 00:28:29.111 killing process with pid 87199 00:28:29.111 13:00:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:29.111 13:00:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:29.111 13:00:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87199' 00:28:29.111 13:00:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 87199 00:28:29.111 13:00:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 87199 00:28:30.045 13:00:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:28:30.045 00:28:30.045 real 0m1.954s 00:28:30.045 user 0m4.821s 00:28:30.045 sys 0m0.299s 00:28:30.045 13:00:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:30.045 13:00:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:28:30.045 ************************************ 00:28:30.045 END TEST bdev_bounds 00:28:30.045 ************************************ 00:28:30.045 13:00:12 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:28:30.045 13:00:12 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:30.045 13:00:12 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:30.045 
13:00:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:30.045 ************************************ 00:28:30.045 START TEST bdev_nbd 00:28:30.045 ************************************ 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:28:30.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=87253 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 87253 /var/tmp/spdk-nbd.sock 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 87253 ']' 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:28:30.045 13:00:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:30.045 [2024-12-05 13:00:12.455105] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:28:30.045 [2024-12-05 13:00:12.455232] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:30.045 [2024-12-05 13:00:12.614560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.303 [2024-12-05 13:00:12.716647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.867 13:00:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:30.867 13:00:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:28:30.867 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:28:30.867 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:30.867 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:28:30.867 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:28:30.867 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:28:30.867 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:30.867 13:00:13 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:28:30.867 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:28:30.867 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:28:30.867 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:28:30.867 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:28:30.867 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:28:30.867 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:28:31.124 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:28:31.124 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:28:31.124 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:28:31.124 13:00:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:31.124 13:00:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:28:31.124 13:00:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:31.124 13:00:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:31.124 13:00:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:31.124 13:00:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:28:31.124 13:00:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:31.124 13:00:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:31.124 13:00:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:31.124 1+0 records in 00:28:31.124 1+0 records out 
00:28:31.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476081 s, 8.6 MB/s 00:28:31.124 13:00:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:31.124 13:00:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:28:31.124 13:00:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:31.124 13:00:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:31.124 13:00:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:28:31.124 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:31.124 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:28:31.124 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:31.381 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:28:31.381 { 00:28:31.381 "nbd_device": "/dev/nbd0", 00:28:31.382 "bdev_name": "raid5f" 00:28:31.382 } 00:28:31.382 ]' 00:28:31.382 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:28:31.382 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:28:31.382 { 00:28:31.382 "nbd_device": "/dev/nbd0", 00:28:31.382 "bdev_name": "raid5f" 00:28:31.382 } 00:28:31.382 ]' 00:28:31.382 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:28:31.382 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:31.382 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:31.382 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0') 00:28:31.382 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:31.382 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:31.382 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:31.382 13:00:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:31.639 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:31.639 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:31.639 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:31.639 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:31.639 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:31.639 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:31.639 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:31.639 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:31.639 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:31.639 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:31.639 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 
00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:31.895 13:00:14 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:31.895 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:28:32.152 /dev/nbd0 00:28:32.152 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:32.152 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:32.152 13:00:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:32.152 13:00:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:28:32.152 13:00:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:32.152 13:00:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:32.152 13:00:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:32.152 13:00:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:28:32.152 13:00:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:32.152 13:00:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:32.152 13:00:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:32.152 1+0 records in 00:28:32.152 1+0 records out 00:28:32.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316629 s, 12.9 MB/s 00:28:32.152 13:00:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:32.152 13:00:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:28:32.152 
13:00:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:32.152 13:00:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:32.152 13:00:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:28:32.152 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:32.152 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:32.152 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:32.152 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:32.152 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:32.453 { 00:28:32.453 "nbd_device": "/dev/nbd0", 00:28:32.453 "bdev_name": "raid5f" 00:28:32.453 } 00:28:32.453 ]' 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:32.453 { 00:28:32.453 "nbd_device": "/dev/nbd0", 00:28:32.453 "bdev_name": "raid5f" 00:28:32.453 } 00:28:32.453 ]' 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:28:32.453 13:00:14 
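The `nbd_get_count` sequence above extracts each `.nbd_device` field from the `nbd_get_disks` RPC JSON with `jq`, then counts `/dev/nbd` matches with `grep -c`. A standalone sketch of that pipeline, with a hard-coded JSON literal standing in for the RPC response (an assumption for illustration):

```shell
# Count attached NBD devices the way nbd_get_count does: jq extracts
# each .nbd_device, grep -c counts the /dev/nbd entries.
# The JSON literal is a sample standing in for the real RPC output.
nbd_disks_json='[ { "nbd_device": "/dev/nbd0", "bdev_name": "raid5f" } ]'
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"
```

The `|| true` guard matters for the empty case the trace also exercises: when no disks remain, `grep -c` finds zero matches and exits nonzero, which is why the log shows `-- # true` before `count=0`.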
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:28:32.453 256+0 records in 00:28:32.453 256+0 records out 00:28:32.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00453642 s, 231 MB/s 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:32.453 256+0 records in 00:28:32.453 256+0 records out 00:28:32.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271286 s, 38.7 MB/s 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:32.453 13:00:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:32.710 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:32.710 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:32.710 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:32.710 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:32.710 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:32.710 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:32.710 13:00:15 
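The `nbd_dd_data_verify` write/verify pair above generates 1 MiB of random data with `dd`, writes it to the device, and `cmp`s it back. A self-contained sketch of that round trip, with a regular temp file standing in for `/dev/nbd0` (an assumption; the `oflag=direct` flag from the trace is dropped since it only applies to block devices):

```shell
# Reproduce the nbd_dd_data_verify round trip: 1 MiB of random data
# written to a target, then compared back byte-for-byte with cmp.
# A plain temp file stands in for the /dev/nbd0 device in the trace.
dd_roundtrip_verify() {
    local src target rc
    src=$(mktemp) target=$(mktemp)
    dd if=/dev/urandom of="$src" bs=4096 count=256 status=none
    dd if="$src" of="$target" bs=4096 count=256 status=none
    cmp -n 1048576 "$src" "$target"   # compare the first 1 MiB
    rc=$?
    rm -f "$src" "$target"
    return $rc
}
```

A nonzero return from `cmp` would abort the test, which is why the trace removes the scratch file only after the comparison succeeds.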
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:32.710 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:32.710 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:32.710 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:32.710 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:32.710 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:32.710 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:32.710 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:32.967 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:32.967 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:32.967 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:28:32.967 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:28:32.967 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:28:32.967 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:28:32.967 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:28:32.967 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:32.967 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:28:32.967 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:32.967 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:32.967 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:28:32.967 13:00:15 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:28:32.967 malloc_lvol_verify 00:28:32.967 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:28:33.224 a9b9d8f8-d6d1-44ab-a299-b71d2517d671 00:28:33.224 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:28:33.481 3a1573d7-f6b9-4d10-8aea-08e24993acd3 00:28:33.481 13:00:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:28:33.738 /dev/nbd0 00:28:33.738 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:28:33.738 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:28:33.738 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:28:33.738 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:28:33.738 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:28:33.738 mke2fs 1.47.0 (5-Feb-2023) 00:28:33.738 Discarding device blocks: 0/4096 done 00:28:33.738 Creating filesystem with 4096 1k blocks and 1024 inodes 00:28:33.738 00:28:33.738 Allocating group tables: 0/1 done 00:28:33.738 Writing inode tables: 0/1 done 00:28:33.738 Creating journal (1024 blocks): done 00:28:33.738 Writing superblocks and filesystem accounting information: 0/1 done 00:28:33.738 00:28:33.738 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:33.738 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 
-- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:33.738 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:33.738 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:33.738 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:33.738 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:33.738 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:33.995 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:33.995 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:33.995 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:33.995 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:33.995 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:33.995 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:33.995 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:33.995 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:33.995 13:00:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 87253 00:28:33.995 13:00:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 87253 ']' 00:28:33.995 13:00:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 87253 00:28:33.995 13:00:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:28:33.995 13:00:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:33.995 13:00:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87253 00:28:33.995 13:00:16 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:33.995 killing process with pid 87253 00:28:33.995 13:00:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:33.995 13:00:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87253' 00:28:33.996 13:00:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 87253 00:28:33.996 13:00:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@978 -- # wait 87253 00:28:34.947 13:00:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:28:34.947 00:28:34.947 real 0m5.026s 00:28:34.947 user 0m7.236s 00:28:34.947 sys 0m1.008s 00:28:34.947 13:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:34.947 13:00:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:28:34.947 ************************************ 00:28:34.947 END TEST bdev_nbd 00:28:34.947 ************************************ 00:28:34.947 13:00:17 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:28:34.947 13:00:17 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:28:34.947 13:00:17 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:28:34.947 13:00:17 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:28:34.947 13:00:17 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:34.947 13:00:17 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:34.948 13:00:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:34.948 ************************************ 00:28:34.948 START TEST bdev_fio 00:28:34.948 ************************************ 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- 
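The `killprocess 87253` teardown above checks the pid, sends the signal, and waits for the target to exit. A minimal sketch of that pattern (the trace's extra guards — the `uname` check, the `sudo` comparison, and `kill -0` probing — are omitted here for brevity):

```shell
# Minimal sketch of the killprocess teardown used at the end of the
# nbd test: validate the pid, signal it, and reap it with wait.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1              # refuse an empty pid
    kill "$pid" 2>/dev/null || return 0    # already gone: nothing to do
    wait "$pid" 2>/dev/null                # reap so the pid is not left as a zombie
    return 0
}
```

Note that `wait` only works on children of the current shell, which holds in the trace because the SPDK app was launched by the same test script.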
bdev/blockdev.sh@330 -- # local env_context 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:28:34.948 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:28:34.948 13:00:17 
blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:34.948 13:00:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:28:35.207 ************************************ 00:28:35.207 START TEST bdev_fio_rw_verify 00:28:35.207 
************************************ 00:28:35.207 13:00:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:35.207 13:00:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:35.207 13:00:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:35.207 13:00:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:35.207 13:00:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:35.207 13:00:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:35.207 13:00:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:28:35.207 13:00:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:35.207 13:00:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:35.207 13:00:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:28:35.207 13:00:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:35.207 13:00:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:35.207 13:00:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:35.207 13:00:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:35.207 13:00:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:28:35.207 13:00:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:35.207 13:00:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:35.207 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:35.207 fio-3.35 00:28:35.207 Starting 1 thread 00:28:47.397 00:28:47.397 job_raid5f: (groupid=0, jobs=1): err= 0: pid=87446: Thu Dec 5 13:00:28 2024 00:28:47.397 read: IOPS=11.3k, BW=44.0MiB/s (46.1MB/s)(440MiB/10001msec) 00:28:47.397 slat (usec): min=18, max=151, avg=21.97, stdev= 4.02 00:28:47.397 clat (usec): min=9, max=832, avg=145.27, stdev=56.57 00:28:47.397 lat (usec): min=29, max=859, avg=167.25, stdev=57.95 00:28:47.397 clat percentiles (usec): 00:28:47.397 | 50.000th=[ 143], 99.000th=[ 262], 99.900th=[ 412], 99.990th=[ 545], 00:28:47.397 | 99.999th=[ 775] 00:28:47.397 write: IOPS=11.8k, BW=46.0MiB/s (48.3MB/s)(455MiB/9883msec); 0 zone resets 00:28:47.397 slat (usec): min=7, max=202, avg=17.90, stdev= 3.95 00:28:47.397 
clat (usec): min=54, max=1131, avg=322.67, stdev=63.72 00:28:47.397 lat (usec): min=70, max=1153, avg=340.57, stdev=66.11 00:28:47.397 clat percentiles (usec): 00:28:47.397 | 50.000th=[ 314], 99.000th=[ 553], 99.900th=[ 766], 99.990th=[ 971], 00:28:47.397 | 99.999th=[ 1106] 00:28:47.397 bw ( KiB/s): min=35504, max=54096, per=98.21%, avg=46281.68, stdev=6044.08, samples=19 00:28:47.397 iops : min= 8876, max=13524, avg=11570.42, stdev=1511.02, samples=19 00:28:47.397 lat (usec) : 10=0.01%, 20=0.01%, 50=0.01%, 100=12.88%, 250=38.88% 00:28:47.397 lat (usec) : 500=47.41%, 750=0.78%, 1000=0.06% 00:28:47.397 lat (msec) : 2=0.01% 00:28:47.397 cpu : usr=99.21%, sys=0.24%, ctx=27, majf=0, minf=9332 00:28:47.397 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:47.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:47.397 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:47.397 issued rwts: total=112652,116431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:47.397 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:47.397 00:28:47.397 Run status group 0 (all jobs): 00:28:47.397 READ: bw=44.0MiB/s (46.1MB/s), 44.0MiB/s-44.0MiB/s (46.1MB/s-46.1MB/s), io=440MiB (461MB), run=10001-10001msec 00:28:47.397 WRITE: bw=46.0MiB/s (48.3MB/s), 46.0MiB/s-46.0MiB/s (48.3MB/s-48.3MB/s), io=455MiB (477MB), run=9883-9883msec 00:28:47.397 ----------------------------------------------------- 00:28:47.397 Suppressions used: 00:28:47.397 count bytes template 00:28:47.397 1 7 /usr/src/fio/parse.c 00:28:47.397 279 26784 /usr/src/fio/iolog.c 00:28:47.397 1 8 libtcmalloc_minimal.so 00:28:47.397 1 904 libcrypto.so 00:28:47.397 ----------------------------------------------------- 00:28:47.397 00:28:47.397 00:28:47.397 real 0m12.086s 00:28:47.397 user 0m12.789s 00:28:47.397 sys 0m0.497s 00:28:47.397 13:00:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:28:47.397 13:00:29 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:28:47.397 ************************************ 00:28:47.397 END TEST bdev_fio_rw_verify 00:28:47.397 ************************************ 00:28:47.397 13:00:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:28:47.397 13:00:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:47.397 13:00:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:28:47.397 13:00:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:47.397 13:00:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:28:47.397 13:00:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:28:47.397 13:00:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:28:47.397 13:00:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:28:47.397 13:00:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:28:47.397 13:00:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:28:47.397 13:00:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:28:47.397 13:00:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:47.397 13:00:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:28:47.397 13:00:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:28:47.397 13:00:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:28:47.397 
13:00:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:28:47.397 13:00:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "8473002e-f422-431d-bcef-35497162c2c4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "8473002e-f422-431d-bcef-35497162c2c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "8473002e-f422-431d-bcef-35497162c2c4",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "3d15869a-4868-46a7-b71f-1586b5c29f0c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "92c903af-ec83-4bd2-a7ed-e730b7aebb9e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "0b8974ea-9532-4231-b39c-6a63e0143993",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:28:47.398 13:00:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:28:47.398 13:00:29 
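The `jq -r 'select(.supported_io_types.unmap == true) | .name'` step above decides whether the bdev gets a trim job: raid5f reports `"unmap": false`, so the filter prints nothing and the trace falls into the `[[ -n '' ]]` branch that skips trim. A standalone sketch of that filter, with a trimmed-down JSON literal standing in for the full `bdev_get_bdevs` dump (an assumption for illustration):

```shell
# Mirror the jq filter used to decide whether a bdev gets the trim
# test: keep only bdevs whose supported_io_types.unmap is true.
# The JSON literal is a reduced sample of the bdev dump in the trace;
# raid5f reports unmap: false, so nothing matches.
bdev_json='{"name": "raid5f", "supported_io_types": {"unmap": false}}'
trim_bdevs=$(echo "$bdev_json" | jq -r 'select(.supported_io_types.unmap == true) | .name')
echo "trim candidates: ${trim_bdevs:-none}"
```

Flipping `unmap` to `true` in the sample would make the filter emit `raid5f`, which in the real script routes the bdev into the `rw=trimwrite` fio job generated just before this step.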
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:28:47.398 13:00:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:47.398 13:00:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:28:47.398 /home/vagrant/spdk_repo/spdk 00:28:47.398 13:00:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:28:47.398 13:00:29 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:28:47.398 00:28:47.398 real 0m12.260s 00:28:47.398 user 0m12.873s 00:28:47.398 sys 0m0.568s 00:28:47.398 13:00:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:47.398 ************************************ 00:28:47.398 13:00:29 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:28:47.398 END TEST bdev_fio 00:28:47.398 ************************************ 00:28:47.398 13:00:29 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:47.398 13:00:29 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:47.398 13:00:29 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:28:47.398 13:00:29 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:47.398 13:00:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:47.398 ************************************ 00:28:47.398 START TEST bdev_verify 00:28:47.398 ************************************ 00:28:47.398 13:00:29 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:47.398 [2024-12-05 13:00:29.809065] Starting SPDK v25.01-pre git sha1 2cae84b3c / 
DPDK 24.03.0 initialization... 00:28:47.398 [2024-12-05 13:00:29.809193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87609 ] 00:28:47.398 [2024-12-05 13:00:29.973486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:47.655 [2024-12-05 13:00:30.083325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.655 [2024-12-05 13:00:30.083566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.913 Running I/O for 5 seconds... 00:28:50.252 14904.00 IOPS, 58.22 MiB/s [2024-12-05T13:00:33.791Z] 16322.50 IOPS, 63.76 MiB/s [2024-12-05T13:00:34.751Z] 16572.00 IOPS, 64.73 MiB/s [2024-12-05T13:00:35.689Z] 16751.25 IOPS, 65.43 MiB/s [2024-12-05T13:00:35.689Z] 17288.20 IOPS, 67.53 MiB/s 00:28:53.102 Latency(us) 00:28:53.102 [2024-12-05T13:00:35.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.102 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:53.102 Verification LBA range: start 0x0 length 0x2000 00:28:53.102 raid5f : 5.01 8773.04 34.27 0.00 0.00 21692.94 181.96 22080.59 00:28:53.102 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:53.102 Verification LBA range: start 0x2000 length 0x2000 00:28:53.102 raid5f : 5.01 8518.25 33.27 0.00 0.00 22595.45 96.49 23088.84 00:28:53.102 [2024-12-05T13:00:35.689Z] =================================================================================================================== 00:28:53.102 [2024-12-05T13:00:35.689Z] Total : 17291.29 67.54 0.00 0.00 22137.68 96.49 23088.84 00:28:53.671 00:28:53.671 real 0m6.489s 00:28:53.671 user 0m12.099s 00:28:53.671 sys 0m0.205s 00:28:53.671 13:00:36 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:28:53.671 ************************************ 00:28:53.671 END TEST bdev_verify 00:28:53.671 ************************************ 00:28:53.671 13:00:36 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:28:53.930 13:00:36 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:53.930 13:00:36 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:28:53.930 13:00:36 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:53.930 13:00:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:28:53.930 ************************************ 00:28:53.930 START TEST bdev_verify_big_io 00:28:53.930 ************************************ 00:28:53.930 13:00:36 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:53.930 [2024-12-05 13:00:36.364421] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:28:53.930 [2024-12-05 13:00:36.364612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87702 ] 00:28:54.227 [2024-12-05 13:00:36.534364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:54.227 [2024-12-05 13:00:36.619682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.227 [2024-12-05 13:00:36.619792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.494 Running I/O for 5 seconds... 
00:28:56.806 887.00 IOPS, 55.44 MiB/s [2024-12-05T13:00:40.353Z] 1015.00 IOPS, 63.44 MiB/s [2024-12-05T13:00:41.288Z] 1057.67 IOPS, 66.10 MiB/s [2024-12-05T13:00:42.231Z] 1094.50 IOPS, 68.41 MiB/s [2024-12-05T13:00:42.231Z] 1116.80 IOPS, 69.80 MiB/s 00:28:59.644 Latency(us) 00:28:59.644 [2024-12-05T13:00:42.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.644 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:59.644 Verification LBA range: start 0x0 length 0x200 00:28:59.644 raid5f : 5.07 526.23 32.89 0.00 0.00 5970438.71 130.76 282308.92 00:28:59.644 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:59.644 Verification LBA range: start 0x200 length 0x200 00:28:59.644 raid5f : 5.20 610.25 38.14 0.00 0.00 5140343.88 133.91 246818.66 00:28:59.644 [2024-12-05T13:00:42.231Z] =================================================================================================================== 00:28:59.644 [2024-12-05T13:00:42.231Z] Total : 1136.48 71.03 0.00 0.00 5519430.00 130.76 282308.92 00:29:00.586 00:29:00.586 real 0m6.643s 00:29:00.586 user 0m12.393s 00:29:00.586 sys 0m0.217s 00:29:00.586 13:00:42 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:00.586 13:00:42 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:29:00.586 ************************************ 00:29:00.586 END TEST bdev_verify_big_io 00:29:00.586 ************************************ 00:29:00.586 13:00:42 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:00.586 13:00:42 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:29:00.586 13:00:42 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:00.586 13:00:42 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:00.586 ************************************ 00:29:00.586 START TEST bdev_write_zeroes 00:29:00.586 ************************************ 00:29:00.586 13:00:42 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:00.586 [2024-12-05 13:00:43.020511] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:29:00.586 [2024-12-05 13:00:43.020631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87789 ] 00:29:00.844 [2024-12-05 13:00:43.175810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.844 [2024-12-05 13:00:43.262126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.102 Running I/O for 1 seconds... 
00:29:02.037 28647.00 IOPS, 111.90 MiB/s 00:29:02.037 Latency(us) 00:29:02.037 [2024-12-05T13:00:44.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:02.037 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:02.037 raid5f : 1.01 28609.49 111.76 0.00 0.00 4460.68 1260.31 6125.10 00:29:02.037 [2024-12-05T13:00:44.624Z] =================================================================================================================== 00:29:02.037 [2024-12-05T13:00:44.624Z] Total : 28609.49 111.76 0.00 0.00 4460.68 1260.31 6125.10 00:29:02.993 00:29:02.993 real 0m2.378s 00:29:02.993 user 0m2.081s 00:29:02.993 sys 0m0.173s 00:29:02.993 13:00:45 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:02.993 13:00:45 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:29:02.993 ************************************ 00:29:02.993 END TEST bdev_write_zeroes 00:29:02.993 ************************************ 00:29:02.993 13:00:45 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:02.993 13:00:45 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:29:02.993 13:00:45 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:02.993 13:00:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:02.993 ************************************ 00:29:02.993 START TEST bdev_json_nonenclosed 00:29:02.993 ************************************ 00:29:02.993 13:00:45 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:02.993 [2024-12-05 
13:00:45.440507] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:29:02.993 [2024-12-05 13:00:45.440624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87832 ] 00:29:03.250 [2024-12-05 13:00:45.596106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.250 [2024-12-05 13:00:45.680915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.250 [2024-12-05 13:00:45.680992] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:29:03.250 [2024-12-05 13:00:45.681010] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:03.250 [2024-12-05 13:00:45.681024] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:03.251 00:29:03.251 real 0m0.450s 00:29:03.251 user 0m0.251s 00:29:03.251 sys 0m0.095s 00:29:03.251 13:00:45 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:03.251 13:00:45 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:29:03.251 ************************************ 00:29:03.251 END TEST bdev_json_nonenclosed 00:29:03.251 ************************************ 00:29:03.508 13:00:45 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:03.508 13:00:45 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:29:03.508 13:00:45 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:03.508 13:00:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:03.508 
************************************ 00:29:03.508 START TEST bdev_json_nonarray 00:29:03.508 ************************************ 00:29:03.508 13:00:45 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:03.508 [2024-12-05 13:00:45.932413] Starting SPDK v25.01-pre git sha1 2cae84b3c / DPDK 24.03.0 initialization... 00:29:03.508 [2024-12-05 13:00:45.932550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87858 ] 00:29:03.508 [2024-12-05 13:00:46.088025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.767 [2024-12-05 13:00:46.172791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.767 [2024-12-05 13:00:46.172877] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:29:03.767 [2024-12-05 13:00:46.172892] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:03.767 [2024-12-05 13:00:46.172905] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:03.767 00:29:03.767 real 0m0.452s 00:29:03.767 user 0m0.256s 00:29:03.767 sys 0m0.092s 00:29:03.767 13:00:46 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:03.767 13:00:46 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:29:03.767 ************************************ 00:29:03.767 END TEST bdev_json_nonarray 00:29:03.767 ************************************ 00:29:03.767 13:00:46 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:29:03.767 13:00:46 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:29:03.767 13:00:46 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:29:03.767 13:00:46 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:29:03.767 13:00:46 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:29:03.767 13:00:46 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:29:03.767 13:00:46 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:04.025 13:00:46 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:29:04.025 13:00:46 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:29:04.025 13:00:46 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:29:04.025 13:00:46 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:29:04.025 ************************************ 00:29:04.025 END TEST blockdev_raid5f 00:29:04.025 ************************************ 00:29:04.025 00:29:04.025 real 0m40.673s 00:29:04.025 user 0m56.466s 00:29:04.025 sys 0m3.510s 00:29:04.025 13:00:46 blockdev_raid5f -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:29:04.025 13:00:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:29:04.025 13:00:46 -- spdk/autotest.sh@194 -- # uname -s 00:29:04.025 13:00:46 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:29:04.025 13:00:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:29:04.025 13:00:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:29:04.025 13:00:46 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:29:04.025 13:00:46 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:29:04.025 13:00:46 -- spdk/autotest.sh@260 -- # timing_exit lib 00:29:04.025 13:00:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:04.025 13:00:46 -- common/autotest_common.sh@10 -- # set +x 00:29:04.025 13:00:46 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:29:04.025 13:00:46 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:29:04.025 13:00:46 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:29:04.025 13:00:46 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:04.025 13:00:46 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:04.025 13:00:46 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:04.025 13:00:46 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:29:04.025 13:00:46 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:04.025 13:00:46 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:04.025 13:00:46 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:04.025 13:00:46 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:04.025 13:00:46 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:04.025 13:00:46 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:04.025 13:00:46 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:29:04.025 13:00:46 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:04.025 13:00:46 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:04.025 13:00:46 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:04.025 13:00:46 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:29:04.025 13:00:46 -- spdk/autotest.sh@385 -- # trap - 
SIGINT SIGTERM EXIT 00:29:04.025 13:00:46 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:29:04.025 13:00:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:04.026 13:00:46 -- common/autotest_common.sh@10 -- # set +x 00:29:04.026 13:00:46 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:29:04.026 13:00:46 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:29:04.026 13:00:46 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:29:04.026 13:00:46 -- common/autotest_common.sh@10 -- # set +x 00:29:04.962 INFO: APP EXITING 00:29:04.962 INFO: killing all VMs 00:29:04.962 INFO: killing vhost app 00:29:04.962 INFO: EXIT DONE 00:29:05.221 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:05.221 Waiting for block devices as requested 00:29:05.221 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:05.480 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:06.047 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:06.047 Cleaning 00:29:06.047 Removing: /var/run/dpdk/spdk0/config 00:29:06.047 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:06.047 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:06.047 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:06.047 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:06.047 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:06.047 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:06.047 Removing: /dev/shm/spdk_tgt_trace.pid56058 00:29:06.047 Removing: /var/run/dpdk/spdk0 00:29:06.047 Removing: /var/run/dpdk/spdk_pid55851 00:29:06.047 Removing: /var/run/dpdk/spdk_pid56058 00:29:06.047 Removing: /var/run/dpdk/spdk_pid56270 00:29:06.047 Removing: /var/run/dpdk/spdk_pid56363 00:29:06.047 Removing: /var/run/dpdk/spdk_pid56403 00:29:06.047 Removing: /var/run/dpdk/spdk_pid56520 00:29:06.047 Removing: 
/var/run/dpdk/spdk_pid56538 00:29:06.047 Removing: /var/run/dpdk/spdk_pid56731 00:29:06.047 Removing: /var/run/dpdk/spdk_pid56830 00:29:06.047 Removing: /var/run/dpdk/spdk_pid56921 00:29:06.047 Removing: /var/run/dpdk/spdk_pid57031 00:29:06.047 Removing: /var/run/dpdk/spdk_pid57123 00:29:06.047 Removing: /var/run/dpdk/spdk_pid57168 00:29:06.047 Removing: /var/run/dpdk/spdk_pid57199 00:29:06.047 Removing: /var/run/dpdk/spdk_pid57275 00:29:06.047 Removing: /var/run/dpdk/spdk_pid57375 00:29:06.047 Removing: /var/run/dpdk/spdk_pid57806 00:29:06.047 Removing: /var/run/dpdk/spdk_pid57859 00:29:06.047 Removing: /var/run/dpdk/spdk_pid57922 00:29:06.047 Removing: /var/run/dpdk/spdk_pid57938 00:29:06.047 Removing: /var/run/dpdk/spdk_pid58035 00:29:06.047 Removing: /var/run/dpdk/spdk_pid58051 00:29:06.047 Removing: /var/run/dpdk/spdk_pid58148 00:29:06.047 Removing: /var/run/dpdk/spdk_pid58159 00:29:06.047 Removing: /var/run/dpdk/spdk_pid58212 00:29:06.047 Removing: /var/run/dpdk/spdk_pid58230 00:29:06.047 Removing: /var/run/dpdk/spdk_pid58283 00:29:06.047 Removing: /var/run/dpdk/spdk_pid58301 00:29:06.047 Removing: /var/run/dpdk/spdk_pid58455 00:29:06.047 Removing: /var/run/dpdk/spdk_pid58492 00:29:06.047 Removing: /var/run/dpdk/spdk_pid58581 00:29:06.047 Removing: /var/run/dpdk/spdk_pid59813 00:29:06.047 Removing: /var/run/dpdk/spdk_pid60013 00:29:06.047 Removing: /var/run/dpdk/spdk_pid60142 00:29:06.047 Removing: /var/run/dpdk/spdk_pid60755 00:29:06.047 Removing: /var/run/dpdk/spdk_pid60951 00:29:06.047 Removing: /var/run/dpdk/spdk_pid61086 00:29:06.047 Removing: /var/run/dpdk/spdk_pid61686 00:29:06.047 Removing: /var/run/dpdk/spdk_pid61999 00:29:06.047 Removing: /var/run/dpdk/spdk_pid62134 00:29:06.047 Removing: /var/run/dpdk/spdk_pid63444 00:29:06.047 Removing: /var/run/dpdk/spdk_pid63686 00:29:06.047 Removing: /var/run/dpdk/spdk_pid63815 00:29:06.047 Removing: /var/run/dpdk/spdk_pid65128 00:29:06.047 Removing: /var/run/dpdk/spdk_pid65370 00:29:06.047 Removing: 
/var/run/dpdk/spdk_pid65499 00:29:06.047 Removing: /var/run/dpdk/spdk_pid66818 00:29:06.047 Removing: /var/run/dpdk/spdk_pid67236 00:29:06.047 Removing: /var/run/dpdk/spdk_pid67376 00:29:06.047 Removing: /var/run/dpdk/spdk_pid68784 00:29:06.047 Removing: /var/run/dpdk/spdk_pid69026 00:29:06.047 Removing: /var/run/dpdk/spdk_pid69161 00:29:06.047 Removing: /var/run/dpdk/spdk_pid70570 00:29:06.047 Removing: /var/run/dpdk/spdk_pid70818 00:29:06.047 Removing: /var/run/dpdk/spdk_pid70947 00:29:06.047 Removing: /var/run/dpdk/spdk_pid72356 00:29:06.047 Removing: /var/run/dpdk/spdk_pid72821 00:29:06.047 Removing: /var/run/dpdk/spdk_pid72956 00:29:06.047 Removing: /var/run/dpdk/spdk_pid73088 00:29:06.047 Removing: /var/run/dpdk/spdk_pid73483 00:29:06.047 Removing: /var/run/dpdk/spdk_pid74194 00:29:06.047 Removing: /var/run/dpdk/spdk_pid74550 00:29:06.047 Removing: /var/run/dpdk/spdk_pid75211 00:29:06.047 Removing: /var/run/dpdk/spdk_pid75637 00:29:06.047 Removing: /var/run/dpdk/spdk_pid76361 00:29:06.047 Removing: /var/run/dpdk/spdk_pid76760 00:29:06.047 Removing: /var/run/dpdk/spdk_pid78632 00:29:06.047 Removing: /var/run/dpdk/spdk_pid79048 00:29:06.047 Removing: /var/run/dpdk/spdk_pid79467 00:29:06.047 Removing: /var/run/dpdk/spdk_pid81464 00:29:06.047 Removing: /var/run/dpdk/spdk_pid81927 00:29:06.047 Removing: /var/run/dpdk/spdk_pid82427 00:29:06.047 Removing: /var/run/dpdk/spdk_pid83468 00:29:06.047 Removing: /var/run/dpdk/spdk_pid83774 00:29:06.047 Removing: /var/run/dpdk/spdk_pid84678 00:29:06.047 Removing: /var/run/dpdk/spdk_pid84984 00:29:06.047 Removing: /var/run/dpdk/spdk_pid85881 00:29:06.047 Removing: /var/run/dpdk/spdk_pid86199 00:29:06.047 Removing: /var/run/dpdk/spdk_pid86852 00:29:06.047 Removing: /var/run/dpdk/spdk_pid87112 00:29:06.047 Removing: /var/run/dpdk/spdk_pid87162 00:29:06.047 Removing: /var/run/dpdk/spdk_pid87199 00:29:06.047 Removing: /var/run/dpdk/spdk_pid87436 00:29:06.047 Removing: /var/run/dpdk/spdk_pid87609 00:29:06.047 Removing: 
/var/run/dpdk/spdk_pid87702 00:29:06.047 Removing: /var/run/dpdk/spdk_pid87789 00:29:06.047 Removing: /var/run/dpdk/spdk_pid87832 00:29:06.047 Removing: /var/run/dpdk/spdk_pid87858 00:29:06.047 Clean 00:29:06.305 13:00:48 -- common/autotest_common.sh@1453 -- # return 0 00:29:06.305 13:00:48 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:29:06.305 13:00:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:06.305 13:00:48 -- common/autotest_common.sh@10 -- # set +x 00:29:06.305 13:00:48 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:29:06.305 13:00:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:06.305 13:00:48 -- common/autotest_common.sh@10 -- # set +x 00:29:06.305 13:00:48 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:06.305 13:00:48 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:06.305 13:00:48 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:06.305 13:00:48 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:29:06.305 13:00:48 -- spdk/autotest.sh@398 -- # hostname 00:29:06.306 13:00:48 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:06.306 geninfo: WARNING: invalid characters removed from testname! 
00:29:28.238 13:01:09 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:30.159 13:01:12 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:32.092 13:01:14 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:33.990 13:01:16 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:35.889 13:01:18 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:37.795 13:01:20 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:40.322 13:01:22 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:40.322 13:01:22 -- spdk/autorun.sh@1 -- $ timing_finish 00:29:40.322 13:01:22 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:29:40.322 13:01:22 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:40.322 13:01:22 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:29:40.322 13:01:22 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:40.322 + [[ -n 4979 ]] 00:29:40.322 + sudo kill 4979 00:29:40.329 [Pipeline] } 00:29:40.346 [Pipeline] // timeout 00:29:40.351 [Pipeline] } 00:29:40.366 [Pipeline] // stage 00:29:40.372 [Pipeline] } 00:29:40.387 [Pipeline] // catchError 00:29:40.397 [Pipeline] stage 00:29:40.400 [Pipeline] { (Stop VM) 00:29:40.414 [Pipeline] sh 00:29:40.691 + vagrant halt 00:29:43.969 ==> default: Halting domain... 00:29:49.237 [Pipeline] sh 00:29:49.514 + vagrant destroy -f 00:29:52.793 ==> default: Removing domain... 
00:29:52.803 [Pipeline] sh 00:29:53.078 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:29:53.085 [Pipeline] } 00:29:53.099 [Pipeline] // stage 00:29:53.104 [Pipeline] } 00:29:53.118 [Pipeline] // dir 00:29:53.122 [Pipeline] } 00:29:53.134 [Pipeline] // wrap 00:29:53.139 [Pipeline] } 00:29:53.149 [Pipeline] // catchError 00:29:53.157 [Pipeline] stage 00:29:53.159 [Pipeline] { (Epilogue) 00:29:53.171 [Pipeline] sh 00:29:53.449 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:00.018 [Pipeline] catchError 00:30:00.019 [Pipeline] { 00:30:00.032 [Pipeline] sh 00:30:00.309 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:00.309 Artifacts sizes are good 00:30:00.318 [Pipeline] } 00:30:00.335 [Pipeline] // catchError 00:30:00.350 [Pipeline] archiveArtifacts 00:30:00.358 Archiving artifacts 00:30:00.482 [Pipeline] cleanWs 00:30:00.495 [WS-CLEANUP] Deleting project workspace... 00:30:00.495 [WS-CLEANUP] Deferred wipeout is used... 00:30:00.500 [WS-CLEANUP] done 00:30:00.502 [Pipeline] } 00:30:00.518 [Pipeline] // stage 00:30:00.523 [Pipeline] } 00:30:00.537 [Pipeline] // node 00:30:00.542 [Pipeline] End of Pipeline 00:30:00.574 Finished: SUCCESS